Jan 23 09:01:20 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 23 09:01:20 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 23 09:01:20 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 23 09:01:20 localhost kernel: BIOS-provided physical RAM map:
Jan 23 09:01:20 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 09:01:20 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 09:01:20 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 09:01:20 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 23 09:01:20 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 23 09:01:20 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 09:01:20 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 09:01:20 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 23 09:01:20 localhost kernel: NX (Execute Disable) protection: active
Jan 23 09:01:20 localhost kernel: APIC: Static calls initialized
Jan 23 09:01:20 localhost kernel: SMBIOS 2.8 present.
Jan 23 09:01:20 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 23 09:01:20 localhost kernel: Hypervisor detected: KVM
Jan 23 09:01:20 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 09:01:20 localhost kernel: kvm-clock: using sched offset of 3143031311 cycles
Jan 23 09:01:20 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 09:01:20 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 23 09:01:20 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 09:01:20 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 09:01:20 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 23 09:01:20 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 09:01:20 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 23 09:01:20 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 23 09:01:20 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 23 09:01:20 localhost kernel: Using GB pages for direct mapping
Jan 23 09:01:20 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 23 09:01:20 localhost kernel: ACPI: Early table checksum verification disabled
Jan 23 09:01:20 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 23 09:01:20 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 09:01:20 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 09:01:20 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 09:01:20 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 23 09:01:20 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 09:01:20 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 09:01:20 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 23 09:01:20 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 23 09:01:20 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 23 09:01:20 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 23 09:01:20 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 23 09:01:20 localhost kernel: No NUMA configuration found
Jan 23 09:01:20 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 23 09:01:20 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 23 09:01:20 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 23 09:01:20 localhost kernel: Zone ranges:
Jan 23 09:01:20 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 09:01:20 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 09:01:20 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 23 09:01:20 localhost kernel:   Device   empty
Jan 23 09:01:20 localhost kernel: Movable zone start for each node
Jan 23 09:01:20 localhost kernel: Early memory node ranges
Jan 23 09:01:20 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 09:01:20 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 23 09:01:20 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 23 09:01:20 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 23 09:01:20 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 09:01:20 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 09:01:20 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 23 09:01:20 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 09:01:20 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 09:01:20 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 09:01:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 09:01:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 09:01:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 09:01:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 09:01:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 09:01:20 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 09:01:20 localhost kernel: TSC deadline timer available
Jan 23 09:01:20 localhost kernel: CPU topo: Max. logical packages:   8
Jan 23 09:01:20 localhost kernel: CPU topo: Max. logical dies:       8
Jan 23 09:01:20 localhost kernel: CPU topo: Max. dies per package:   1
Jan 23 09:01:20 localhost kernel: CPU topo: Max. threads per core:   1
Jan 23 09:01:20 localhost kernel: CPU topo: Num. cores per package:     1
Jan 23 09:01:20 localhost kernel: CPU topo: Num. threads per package:   1
Jan 23 09:01:20 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 23 09:01:20 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 09:01:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 23 09:01:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 23 09:01:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 23 09:01:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 23 09:01:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 23 09:01:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 23 09:01:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 23 09:01:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 23 09:01:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 23 09:01:20 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 23 09:01:20 localhost kernel: Booting paravirtualized kernel on KVM
Jan 23 09:01:20 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 09:01:20 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 23 09:01:20 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 23 09:01:20 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 23 09:01:20 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 23 09:01:20 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 23 09:01:20 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 23 09:01:20 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 23 09:01:20 localhost kernel: random: crng init done
Jan 23 09:01:20 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 23 09:01:20 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 09:01:20 localhost kernel: Fallback order for Node 0: 0 
Jan 23 09:01:20 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 23 09:01:20 localhost kernel: Policy zone: Normal
Jan 23 09:01:20 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 09:01:20 localhost kernel: software IO TLB: area num 8.
Jan 23 09:01:20 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 23 09:01:20 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 23 09:01:20 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 23 09:01:20 localhost kernel: Dynamic Preempt: voluntary
Jan 23 09:01:20 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 09:01:20 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 23 09:01:20 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 23 09:01:20 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 23 09:01:20 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 23 09:01:20 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 23 09:01:20 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 09:01:20 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 23 09:01:20 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 09:01:20 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 09:01:20 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 09:01:20 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 23 09:01:20 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 09:01:20 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 23 09:01:20 localhost kernel: Console: colour VGA+ 80x25
Jan 23 09:01:20 localhost kernel: printk: console [ttyS0] enabled
Jan 23 09:01:20 localhost kernel: ACPI: Core revision 20230331
Jan 23 09:01:20 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 09:01:20 localhost kernel: x2apic enabled
Jan 23 09:01:20 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 09:01:20 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 23 09:01:20 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 23 09:01:20 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 09:01:20 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 09:01:20 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 09:01:20 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 09:01:20 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 09:01:20 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 09:01:20 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 23 09:01:20 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 23 09:01:20 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 09:01:20 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 09:01:20 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 09:01:20 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 09:01:20 localhost kernel: x86/bugs: return thunk changed
Jan 23 09:01:20 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 09:01:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 09:01:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 09:01:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 09:01:20 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 23 09:01:20 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 09:01:20 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 23 09:01:20 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 23 09:01:20 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 23 09:01:20 localhost kernel: landlock: Up and running.
Jan 23 09:01:20 localhost kernel: Yama: becoming mindful.
Jan 23 09:01:20 localhost kernel: SELinux:  Initializing.
Jan 23 09:01:20 localhost kernel: LSM support for eBPF active
Jan 23 09:01:20 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 09:01:20 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 09:01:20 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 23 09:01:20 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 09:01:20 localhost kernel: ... version:                0
Jan 23 09:01:20 localhost kernel: ... bit width:              48
Jan 23 09:01:20 localhost kernel: ... generic registers:      6
Jan 23 09:01:20 localhost kernel: ... value mask:             0000ffffffffffff
Jan 23 09:01:20 localhost kernel: ... max period:             00007fffffffffff
Jan 23 09:01:20 localhost kernel: ... fixed-purpose events:   0
Jan 23 09:01:20 localhost kernel: ... event mask:             000000000000003f
Jan 23 09:01:20 localhost kernel: signal: max sigframe size: 1776
Jan 23 09:01:20 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 23 09:01:20 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 23 09:01:20 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 23 09:01:20 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 23 09:01:20 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 23 09:01:20 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 23 09:01:20 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 23 09:01:20 localhost kernel: node 0 deferred pages initialised in 9ms
Jan 23 09:01:20 localhost kernel: Memory: 7763792K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 23 09:01:20 localhost kernel: devtmpfs: initialized
Jan 23 09:01:20 localhost kernel: x86/mm: Memory block size: 128MB
Jan 23 09:01:20 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 09:01:20 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 23 09:01:20 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 09:01:20 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 09:01:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 23 09:01:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 09:01:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 09:01:20 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 23 09:01:20 localhost kernel: audit: type=2000 audit(1769158878.333:1): state=initialized audit_enabled=0 res=1
Jan 23 09:01:20 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 23 09:01:20 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 09:01:20 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 09:01:20 localhost kernel: cpuidle: using governor menu
Jan 23 09:01:20 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 09:01:20 localhost kernel: PCI: Using configuration type 1 for base access
Jan 23 09:01:20 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 23 09:01:20 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 09:01:20 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 09:01:20 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 09:01:20 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 09:01:20 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 09:01:20 localhost kernel: Demotion targets for Node 0: null
Jan 23 09:01:20 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 09:01:20 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 23 09:01:20 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 23 09:01:20 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 09:01:20 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 09:01:20 localhost kernel: ACPI: Interpreter enabled
Jan 23 09:01:20 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 23 09:01:20 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 09:01:20 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 09:01:20 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 09:01:20 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 23 09:01:20 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 09:01:20 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [3] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [4] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [5] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [6] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [7] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [8] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [9] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [10] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [11] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [12] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [13] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [14] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [15] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [16] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [17] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [18] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [19] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [20] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [21] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [22] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [23] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [24] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [25] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [26] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [27] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [28] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [29] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [30] registered
Jan 23 09:01:20 localhost kernel: acpiphp: Slot [31] registered
Jan 23 09:01:20 localhost kernel: PCI host bridge to bus 0000:00
Jan 23 09:01:20 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 23 09:01:20 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 23 09:01:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 09:01:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 09:01:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 23 09:01:20 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 09:01:20 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 23 09:01:20 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 23 09:01:20 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 23 09:01:20 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 23 09:01:20 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 23 09:01:20 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 23 09:01:20 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 09:01:20 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 09:01:20 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 23 09:01:20 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 23 09:01:20 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 23 09:01:20 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 23 09:01:20 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 09:01:20 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 23 09:01:20 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 23 09:01:20 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 23 09:01:20 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 09:01:20 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 23 09:01:20 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 23 09:01:20 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 09:01:20 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 23 09:01:20 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 23 09:01:20 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 09:01:20 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 09:01:20 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 09:01:20 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 09:01:20 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 23 09:01:20 localhost kernel: iommu: Default domain type: Translated
Jan 23 09:01:20 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 09:01:20 localhost kernel: SCSI subsystem initialized
Jan 23 09:01:20 localhost kernel: ACPI: bus type USB registered
Jan 23 09:01:20 localhost kernel: usbcore: registered new interface driver usbfs
Jan 23 09:01:20 localhost kernel: usbcore: registered new interface driver hub
Jan 23 09:01:20 localhost kernel: usbcore: registered new device driver usb
Jan 23 09:01:20 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 09:01:20 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 23 09:01:20 localhost kernel: PTP clock support registered
Jan 23 09:01:20 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 23 09:01:20 localhost kernel: NetLabel: Initializing
Jan 23 09:01:20 localhost kernel: NetLabel:  domain hash size = 128
Jan 23 09:01:20 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 23 09:01:20 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 23 09:01:20 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 23 09:01:20 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 09:01:20 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 09:01:20 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 23 09:01:20 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 23 09:01:20 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 23 09:01:20 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 09:01:20 localhost kernel: vgaarb: loaded
Jan 23 09:01:20 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 09:01:20 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 09:01:20 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 09:01:20 localhost kernel: pnp: PnP ACPI init
Jan 23 09:01:20 localhost kernel: pnp 00:03: [dma 2]
Jan 23 09:01:20 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 23 09:01:20 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 09:01:20 localhost kernel: NET: Registered PF_INET protocol family
Jan 23 09:01:20 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 09:01:20 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 23 09:01:20 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 09:01:20 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 09:01:20 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 23 09:01:20 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 23 09:01:20 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 23 09:01:20 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 09:01:20 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 09:01:20 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 09:01:20 localhost kernel: NET: Registered PF_XDP protocol family
Jan 23 09:01:20 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 23 09:01:20 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 23 09:01:20 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 09:01:20 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 23 09:01:20 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 23 09:01:20 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 23 09:01:20 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 23 09:01:20 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 75525 usecs
Jan 23 09:01:20 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 23 09:01:20 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 09:01:20 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 23 09:01:20 localhost kernel: ACPI: bus type thunderbolt registered
Jan 23 09:01:20 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 23 09:01:20 localhost kernel: Initialise system trusted keyrings
Jan 23 09:01:20 localhost kernel: Key type blacklist registered
Jan 23 09:01:20 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 23 09:01:20 localhost kernel: zbud: loaded
Jan 23 09:01:20 localhost kernel: integrity: Platform Keyring initialized
Jan 23 09:01:20 localhost kernel: integrity: Machine keyring initialized
Jan 23 09:01:20 localhost kernel: Freeing initrd memory: 87956K
Jan 23 09:01:20 localhost kernel: NET: Registered PF_ALG protocol family
Jan 23 09:01:20 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 23 09:01:20 localhost kernel: Key type asymmetric registered
Jan 23 09:01:20 localhost kernel: Asymmetric key parser 'x509' registered
Jan 23 09:01:20 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 23 09:01:20 localhost kernel: io scheduler mq-deadline registered
Jan 23 09:01:20 localhost kernel: io scheduler kyber registered
Jan 23 09:01:20 localhost kernel: io scheduler bfq registered
Jan 23 09:01:20 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 23 09:01:20 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 23 09:01:20 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 23 09:01:20 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 23 09:01:20 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 23 09:01:20 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 23 09:01:20 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 23 09:01:20 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 09:01:20 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 09:01:20 localhost kernel: Non-volatile memory driver v1.3
Jan 23 09:01:20 localhost kernel: rdac: device handler registered
Jan 23 09:01:20 localhost kernel: hp_sw: device handler registered
Jan 23 09:01:20 localhost kernel: emc: device handler registered
Jan 23 09:01:20 localhost kernel: alua: device handler registered
Jan 23 09:01:20 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 23 09:01:20 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 23 09:01:20 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 23 09:01:20 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 23 09:01:20 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 23 09:01:20 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 09:01:20 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 23 09:01:20 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 23 09:01:20 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 23 09:01:20 localhost kernel: hub 1-0:1.0: USB hub found
Jan 23 09:01:20 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 23 09:01:20 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 23 09:01:20 localhost kernel: usbserial: USB Serial support registered for generic
Jan 23 09:01:20 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 09:01:20 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 09:01:20 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 09:01:20 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 09:01:20 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 09:01:20 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 23 09:01:20 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 09:01:20 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T09:01:19 UTC (1769158879)
Jan 23 09:01:20 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 23 09:01:20 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 23 09:01:20 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 09:01:20 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 23 09:01:20 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 09:01:20 localhost kernel: usbcore: registered new interface driver usbhid
Jan 23 09:01:20 localhost kernel: usbhid: USB HID core driver
Jan 23 09:01:20 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 23 09:01:20 localhost kernel: Initializing XFRM netlink socket
Jan 23 09:01:20 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 23 09:01:20 localhost kernel: Segment Routing with IPv6
Jan 23 09:01:20 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 23 09:01:20 localhost kernel: mpls_gso: MPLS GSO support
Jan 23 09:01:20 localhost kernel: IPI shorthand broadcast: enabled
Jan 23 09:01:20 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 23 09:01:20 localhost kernel: AES CTR mode by8 optimization enabled
Jan 23 09:01:20 localhost kernel: sched_clock: Marking stable (1127003019, 142770270)->(1373792049, -104018760)
Jan 23 09:01:20 localhost kernel: registered taskstats version 1
Jan 23 09:01:20 localhost kernel: Loading compiled-in X.509 certificates
Jan 23 09:01:20 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 23 09:01:20 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 23 09:01:20 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 23 09:01:20 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 23 09:01:20 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 23 09:01:20 localhost kernel: Demotion targets for Node 0: null
Jan 23 09:01:20 localhost kernel: page_owner is disabled
Jan 23 09:01:20 localhost kernel: Key type .fscrypt registered
Jan 23 09:01:20 localhost kernel: Key type fscrypt-provisioning registered
Jan 23 09:01:20 localhost kernel: Key type big_key registered
Jan 23 09:01:20 localhost kernel: Key type encrypted registered
Jan 23 09:01:20 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 09:01:20 localhost kernel: Loading compiled-in module X.509 certificates
Jan 23 09:01:20 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 23 09:01:20 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 23 09:01:20 localhost kernel: ima: No architecture policies found
Jan 23 09:01:20 localhost kernel: evm: Initialising EVM extended attributes:
Jan 23 09:01:20 localhost kernel: evm: security.selinux
Jan 23 09:01:20 localhost kernel: evm: security.SMACK64 (disabled)
Jan 23 09:01:20 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 23 09:01:20 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 23 09:01:20 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 23 09:01:20 localhost kernel: evm: security.apparmor (disabled)
Jan 23 09:01:20 localhost kernel: evm: security.ima
Jan 23 09:01:20 localhost kernel: evm: security.capability
Jan 23 09:01:20 localhost kernel: evm: HMAC attrs: 0x1
Jan 23 09:01:20 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 23 09:01:20 localhost kernel: Running certificate verification RSA selftest
Jan 23 09:01:20 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 23 09:01:20 localhost kernel: Running certificate verification ECDSA selftest
Jan 23 09:01:20 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 23 09:01:20 localhost kernel: clk: Disabling unused clocks
Jan 23 09:01:20 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 23 09:01:20 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 23 09:01:20 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 23 09:01:20 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 23 09:01:20 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 23 09:01:20 localhost kernel: Run /init as init process
Jan 23 09:01:20 localhost kernel:   with arguments:
Jan 23 09:01:20 localhost kernel:     /init
Jan 23 09:01:20 localhost kernel:   with environment:
Jan 23 09:01:20 localhost kernel:     HOME=/
Jan 23 09:01:20 localhost kernel:     TERM=linux
Jan 23 09:01:20 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 23 09:01:20 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 23 09:01:20 localhost systemd[1]: Detected virtualization kvm.
Jan 23 09:01:20 localhost systemd[1]: Detected architecture x86-64.
Jan 23 09:01:20 localhost systemd[1]: Running in initrd.
Jan 23 09:01:20 localhost systemd[1]: No hostname configured, using default hostname.
Jan 23 09:01:20 localhost systemd[1]: Hostname set to <localhost>.
Jan 23 09:01:20 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 23 09:01:20 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 23 09:01:20 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 23 09:01:20 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 23 09:01:20 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 23 09:01:20 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 23 09:01:20 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 23 09:01:20 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 23 09:01:20 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 23 09:01:20 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 23 09:01:20 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 23 09:01:20 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 23 09:01:20 localhost systemd[1]: Reached target Local File Systems.
Jan 23 09:01:20 localhost systemd[1]: Reached target Path Units.
Jan 23 09:01:20 localhost systemd[1]: Reached target Slice Units.
Jan 23 09:01:20 localhost systemd[1]: Reached target Swaps.
Jan 23 09:01:20 localhost systemd[1]: Reached target Timer Units.
Jan 23 09:01:20 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 23 09:01:20 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 23 09:01:20 localhost systemd[1]: Listening on Journal Socket.
Jan 23 09:01:20 localhost systemd[1]: Listening on udev Control Socket.
Jan 23 09:01:20 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 23 09:01:20 localhost systemd[1]: Reached target Socket Units.
Jan 23 09:01:20 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 23 09:01:20 localhost systemd[1]: Starting Journal Service...
Jan 23 09:01:20 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 23 09:01:20 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 23 09:01:20 localhost systemd[1]: Starting Create System Users...
Jan 23 09:01:20 localhost systemd[1]: Starting Setup Virtual Console...
Jan 23 09:01:20 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 23 09:01:20 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 23 09:01:20 localhost systemd-journald[305]: Journal started
Jan 23 09:01:20 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/f03a036043fd4fa3b4989716505b3cad) is 8.0M, max 153.6M, 145.6M free.
Jan 23 09:01:20 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Jan 23 09:01:20 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Jan 23 09:01:20 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 23 09:01:20 localhost systemd[1]: Started Journal Service.
Jan 23 09:01:20 localhost systemd[1]: Finished Create System Users.
Jan 23 09:01:20 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 23 09:01:20 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 23 09:01:20 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 23 09:01:20 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 23 09:01:20 localhost systemd[1]: Finished Setup Virtual Console.
Jan 23 09:01:20 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 23 09:01:20 localhost systemd[1]: Starting dracut cmdline hook...
Jan 23 09:01:20 localhost dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Jan 23 09:01:20 localhost dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 23 09:01:20 localhost systemd[1]: Finished dracut cmdline hook.
Jan 23 09:01:20 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 23 09:01:20 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 09:01:20 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 23 09:01:20 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 23 09:01:20 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 23 09:01:20 localhost kernel: RPC: Registered udp transport module.
Jan 23 09:01:20 localhost kernel: RPC: Registered tcp transport module.
Jan 23 09:01:20 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 23 09:01:20 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 23 09:01:20 localhost rpc.statd[440]: Version 2.5.4 starting
Jan 23 09:01:20 localhost rpc.statd[440]: Initializing NSM state
Jan 23 09:01:20 localhost rpc.idmapd[445]: Setting log level to 0
Jan 23 09:01:20 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 23 09:01:20 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 23 09:01:20 localhost systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Jan 23 09:01:20 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 23 09:01:20 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 23 09:01:20 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 23 09:01:20 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 23 09:01:20 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 23 09:01:20 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 23 09:01:20 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 23 09:01:20 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 09:01:20 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 23 09:01:20 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 23 09:01:20 localhost systemd[1]: Reached target Network.
Jan 23 09:01:20 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 23 09:01:20 localhost systemd[1]: Starting dracut initqueue hook...
Jan 23 09:01:21 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 23 09:01:21 localhost systemd-udevd[484]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 09:01:21 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 23 09:01:21 localhost kernel:  vda: vda1
Jan 23 09:01:21 localhost kernel: libata version 3.00 loaded.
Jan 23 09:01:21 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 23 09:01:21 localhost kernel: scsi host0: ata_piix
Jan 23 09:01:21 localhost kernel: scsi host1: ata_piix
Jan 23 09:01:21 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 23 09:01:21 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 23 09:01:21 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 23 09:01:21 localhost systemd[1]: Reached target Initrd Root Device.
Jan 23 09:01:21 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 23 09:01:21 localhost kernel: ata1: found unknown device (class 0)
Jan 23 09:01:21 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 09:01:21 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 23 09:01:21 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 23 09:01:21 localhost systemd[1]: Reached target System Initialization.
Jan 23 09:01:21 localhost systemd[1]: Reached target Basic System.
Jan 23 09:01:21 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 23 09:01:21 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 09:01:21 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 09:01:21 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 23 09:01:21 localhost systemd[1]: Finished dracut initqueue hook.
Jan 23 09:01:21 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 23 09:01:21 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 23 09:01:21 localhost systemd[1]: Reached target Remote File Systems.
Jan 23 09:01:21 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 23 09:01:21 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 23 09:01:21 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 23 09:01:21 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Jan 23 09:01:21 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 23 09:01:21 localhost systemd[1]: Mounting /sysroot...
Jan 23 09:01:21 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 23 09:01:21 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 23 09:01:21 localhost kernel: XFS (vda1): Ending clean mount
Jan 23 09:01:21 localhost systemd[1]: Mounted /sysroot.
Jan 23 09:01:21 localhost systemd[1]: Reached target Initrd Root File System.
Jan 23 09:01:21 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 23 09:01:21 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 09:01:21 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 23 09:01:21 localhost systemd[1]: Reached target Initrd File Systems.
Jan 23 09:01:21 localhost systemd[1]: Reached target Initrd Default Target.
Jan 23 09:01:21 localhost systemd[1]: Starting dracut mount hook...
Jan 23 09:01:21 localhost systemd[1]: Finished dracut mount hook.
Jan 23 09:01:21 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 23 09:01:22 localhost rpc.idmapd[445]: exiting on signal 15
Jan 23 09:01:22 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 23 09:01:22 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 23 09:01:22 localhost systemd[1]: Stopped target Network.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Timer Units.
Jan 23 09:01:22 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 23 09:01:22 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Basic System.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Path Units.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Remote File Systems.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Slice Units.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Socket Units.
Jan 23 09:01:22 localhost systemd[1]: Stopped target System Initialization.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Local File Systems.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Swaps.
Jan 23 09:01:22 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped dracut mount hook.
Jan 23 09:01:22 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 23 09:01:22 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 23 09:01:22 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 23 09:01:22 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 23 09:01:22 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 23 09:01:22 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 23 09:01:22 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 23 09:01:22 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 23 09:01:22 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 23 09:01:22 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 23 09:01:22 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 23 09:01:22 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 23 09:01:22 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Closed udev Control Socket.
Jan 23 09:01:22 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Closed udev Kernel Socket.
Jan 23 09:01:22 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 23 09:01:22 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 23 09:01:22 localhost systemd[1]: Starting Cleanup udev Database...
Jan 23 09:01:22 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 23 09:01:22 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 23 09:01:22 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Stopped Create System Users.
Jan 23 09:01:22 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 09:01:22 localhost systemd[1]: Finished Cleanup udev Database.
Jan 23 09:01:22 localhost systemd[1]: Reached target Switch Root.
Jan 23 09:01:22 localhost systemd[1]: Starting Switch Root...
Jan 23 09:01:22 localhost systemd[1]: Switching root.
Jan 23 09:01:22 localhost systemd-journald[305]: Journal stopped
Jan 23 09:01:23 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Jan 23 09:01:23 localhost kernel: audit: type=1404 audit(1769158882.359:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 23 09:01:23 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 09:01:23 localhost kernel: SELinux:  policy capability open_perms=1
Jan 23 09:01:23 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 09:01:23 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 23 09:01:23 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 09:01:23 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 09:01:23 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 09:01:23 localhost kernel: audit: type=1403 audit(1769158882.515:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 09:01:23 localhost systemd[1]: Successfully loaded SELinux policy in 162.093ms.
Jan 23 09:01:23 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.052ms.
Jan 23 09:01:23 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 23 09:01:23 localhost systemd[1]: Detected virtualization kvm.
Jan 23 09:01:23 localhost systemd[1]: Detected architecture x86-64.
Jan 23 09:01:23 localhost systemd-rc-local-generator[634]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:01:23 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 09:01:23 localhost systemd[1]: Stopped Switch Root.
Jan 23 09:01:23 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 09:01:23 localhost systemd[1]: Created slice Slice /system/getty.
Jan 23 09:01:23 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 23 09:01:23 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 23 09:01:23 localhost systemd[1]: Created slice User and Session Slice.
Jan 23 09:01:23 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 23 09:01:23 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 23 09:01:23 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 23 09:01:23 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 23 09:01:23 localhost systemd[1]: Stopped target Switch Root.
Jan 23 09:01:23 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 23 09:01:23 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 23 09:01:23 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 23 09:01:23 localhost systemd[1]: Reached target Path Units.
Jan 23 09:01:23 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 23 09:01:23 localhost systemd[1]: Reached target Slice Units.
Jan 23 09:01:23 localhost systemd[1]: Reached target Swaps.
Jan 23 09:01:23 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 23 09:01:23 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 23 09:01:23 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 23 09:01:23 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 23 09:01:23 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 23 09:01:23 localhost systemd[1]: Listening on udev Control Socket.
Jan 23 09:01:23 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 23 09:01:23 localhost systemd[1]: Mounting Huge Pages File System...
Jan 23 09:01:23 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 23 09:01:23 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 23 09:01:23 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 23 09:01:23 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 23 09:01:23 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 23 09:01:23 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 23 09:01:23 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 23 09:01:23 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 23 09:01:23 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 23 09:01:23 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 23 09:01:23 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 09:01:23 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 23 09:01:23 localhost systemd[1]: Stopped Journal Service.
Jan 23 09:01:23 localhost kernel: fuse: init (API version 7.37)
Jan 23 09:01:23 localhost systemd[1]: Starting Journal Service...
Jan 23 09:01:23 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 23 09:01:23 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 23 09:01:23 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 09:01:23 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 23 09:01:23 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 09:01:23 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 23 09:01:23 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 23 09:01:23 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 23 09:01:23 localhost systemd[1]: Mounted Huge Pages File System.
Jan 23 09:01:23 localhost systemd-journald[675]: Journal started
Jan 23 09:01:23 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 23 09:01:22 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 23 09:01:22 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 09:01:23 localhost systemd[1]: Started Journal Service.
Jan 23 09:01:23 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 23 09:01:23 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 23 09:01:23 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 23 09:01:23 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 23 09:01:23 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 09:01:23 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 23 09:01:23 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 09:01:23 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 23 09:01:23 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 09:01:23 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 23 09:01:23 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 23 09:01:23 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 23 09:01:23 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 23 09:01:23 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 23 09:01:23 localhost systemd[1]: Mounting FUSE Control File System...
Jan 23 09:01:23 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 23 09:01:23 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 23 09:01:23 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 23 09:01:23 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 09:01:23 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 23 09:01:23 localhost systemd[1]: Starting Create System Users...
Jan 23 09:01:23 localhost kernel: ACPI: bus type drm_connector registered
Jan 23 09:01:23 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 23 09:01:23 localhost systemd[1]: Mounted FUSE Control File System.
Jan 23 09:01:23 localhost systemd-journald[675]: Received client request to flush runtime journal.
Jan 23 09:01:23 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 09:01:23 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 23 09:01:23 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 23 09:01:23 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 23 09:01:23 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 23 09:01:23 localhost systemd[1]: Finished Create System Users.
Jan 23 09:01:23 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 23 09:01:23 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 23 09:01:23 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 23 09:01:23 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 23 09:01:23 localhost systemd[1]: Reached target Local File Systems.
Jan 23 09:01:23 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 23 09:01:23 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 23 09:01:23 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 09:01:23 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 23 09:01:23 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 23 09:01:23 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 23 09:01:23 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 23 09:01:23 localhost bootctl[693]: Couldn't find EFI system partition, skipping.
Jan 23 09:01:23 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 23 09:01:23 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 23 09:01:23 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 23 09:01:23 localhost systemd[1]: Starting Security Auditing Service...
Jan 23 09:01:23 localhost systemd[1]: Starting RPC Bind...
Jan 23 09:01:23 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 23 09:01:23 localhost auditd[699]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 23 09:01:23 localhost auditd[699]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 23 09:01:23 localhost systemd[1]: Started RPC Bind.
Jan 23 09:01:23 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 23 09:01:23 localhost augenrules[704]: /sbin/augenrules: No change
Jan 23 09:01:23 localhost augenrules[719]: No rules
Jan 23 09:01:23 localhost augenrules[719]: enabled 1
Jan 23 09:01:23 localhost augenrules[719]: failure 1
Jan 23 09:01:23 localhost augenrules[719]: pid 699
Jan 23 09:01:23 localhost augenrules[719]: rate_limit 0
Jan 23 09:01:23 localhost augenrules[719]: backlog_limit 8192
Jan 23 09:01:23 localhost augenrules[719]: lost 0
Jan 23 09:01:23 localhost augenrules[719]: backlog 0
Jan 23 09:01:23 localhost augenrules[719]: backlog_wait_time 60000
Jan 23 09:01:23 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 23 09:01:23 localhost augenrules[719]: enabled 1
Jan 23 09:01:23 localhost augenrules[719]: failure 1
Jan 23 09:01:23 localhost augenrules[719]: pid 699
Jan 23 09:01:23 localhost augenrules[719]: rate_limit 0
Jan 23 09:01:23 localhost augenrules[719]: backlog_limit 8192
Jan 23 09:01:23 localhost augenrules[719]: lost 0
Jan 23 09:01:23 localhost augenrules[719]: backlog 3
Jan 23 09:01:23 localhost augenrules[719]: backlog_wait_time 60000
Jan 23 09:01:23 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 23 09:01:23 localhost augenrules[719]: enabled 1
Jan 23 09:01:23 localhost augenrules[719]: failure 1
Jan 23 09:01:23 localhost augenrules[719]: pid 699
Jan 23 09:01:23 localhost augenrules[719]: rate_limit 0
Jan 23 09:01:23 localhost augenrules[719]: backlog_limit 8192
Jan 23 09:01:23 localhost augenrules[719]: lost 0
Jan 23 09:01:23 localhost augenrules[719]: backlog 2
Jan 23 09:01:23 localhost augenrules[719]: backlog_wait_time 60000
Jan 23 09:01:23 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 23 09:01:23 localhost systemd[1]: Started Security Auditing Service.
Jan 23 09:01:23 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 23 09:01:23 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 23 09:01:23 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 23 09:01:23 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 23 09:01:23 localhost systemd[1]: Starting Update is Completed...
Jan 23 09:01:23 localhost systemd[1]: Finished Update is Completed.
Jan 23 09:01:23 localhost systemd-udevd[727]: Using default interface naming scheme 'rhel-9.0'.
Jan 23 09:01:23 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 23 09:01:23 localhost systemd[1]: Reached target System Initialization.
Jan 23 09:01:23 localhost systemd[1]: Started dnf makecache --timer.
Jan 23 09:01:23 localhost systemd[1]: Started Daily rotation of log files.
Jan 23 09:01:23 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 23 09:01:23 localhost systemd[1]: Reached target Timer Units.
Jan 23 09:01:23 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 23 09:01:23 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 23 09:01:23 localhost systemd[1]: Reached target Socket Units.
Jan 23 09:01:23 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 23 09:01:23 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 09:01:23 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 23 09:01:23 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 23 09:01:23 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 09:01:23 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 23 09:01:23 localhost systemd-udevd[754]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 09:01:23 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 23 09:01:23 localhost systemd[1]: Reached target Basic System.
Jan 23 09:01:23 localhost dbus-broker-lau[765]: Ready
Jan 23 09:01:23 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 23 09:01:23 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 23 09:01:23 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 23 09:01:23 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 09:01:23 localhost systemd[1]: Starting NTP client/server...
Jan 23 09:01:23 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 23 09:01:23 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 23 09:01:23 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 23 09:01:23 localhost systemd[1]: Started irqbalance daemon.
Jan 23 09:01:23 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 23 09:01:23 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 09:01:23 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 09:01:23 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 09:01:23 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 23 09:01:23 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 23 09:01:23 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 23 09:01:23 localhost systemd[1]: Starting User Login Management...
Jan 23 09:01:23 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 23 09:01:23 localhost chronyd[791]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 23 09:01:23 localhost chronyd[791]: Loaded 0 symmetric keys
Jan 23 09:01:23 localhost chronyd[791]: Using right/UTC timezone to obtain leap second data
Jan 23 09:01:23 localhost chronyd[791]: Loaded seccomp filter (level 2)
Jan 23 09:01:23 localhost systemd[1]: Started NTP client/server.
Jan 23 09:01:23 localhost systemd-logind[784]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 23 09:01:23 localhost systemd-logind[784]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 23 09:01:23 localhost systemd-logind[784]: New seat seat0.
Jan 23 09:01:23 localhost systemd[1]: Started User Login Management.
Jan 23 09:01:23 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 23 09:01:23 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 23 09:01:24 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 23 09:01:24 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 23 09:01:24 localhost kernel: Console: switching to colour dummy device 80x25
Jan 23 09:01:24 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 23 09:01:24 localhost kernel: [drm] features: -context_init
Jan 23 09:01:24 localhost kernel: [drm] number of scanouts: 1
Jan 23 09:01:24 localhost kernel: [drm] number of cap sets: 0
Jan 23 09:01:24 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 23 09:01:24 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 23 09:01:24 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 23 09:01:24 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 23 09:01:24 localhost kernel: kvm_amd: TSC scaling supported
Jan 23 09:01:24 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 23 09:01:24 localhost kernel: kvm_amd: Nested Paging enabled
Jan 23 09:01:24 localhost kernel: kvm_amd: LBR virtualization supported
Jan 23 09:01:24 localhost iptables.init[778]: iptables: Applying firewall rules: [  OK  ]
Jan 23 09:01:24 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 23 09:01:24 localhost cloud-init[836]: Cloud-init v. 24.4-8.el9 running 'init-local' at Fri, 23 Jan 2026 09:01:24 +0000. Up 5.87 seconds.
Jan 23 09:01:24 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 23 09:01:24 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 23 09:01:24 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpoh5g_57h.mount: Deactivated successfully.
Jan 23 09:01:24 localhost systemd[1]: Starting Hostname Service...
Jan 23 09:01:24 localhost systemd[1]: Started Hostname Service.
Jan 23 09:01:24 np0005593293.novalocal systemd-hostnamed[851]: Hostname set to <np0005593293.novalocal> (static)
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Reached target Preparation for Network.
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Starting Network Manager...
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.7794] NetworkManager (version 1.54.3-2.el9) is starting... (boot:32ce9a2a-527f-4400-a04f-d4b7f74a7a70)
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.7799] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.7866] manager[0x55613a0fa000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.7899] hostname: hostname: using hostnamed
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.7899] hostname: static hostname changed from (none) to "np0005593293.novalocal"
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.7903] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8005] manager[0x55613a0fa000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8014] manager[0x55613a0fa000]: rfkill: WWAN hardware radio set enabled
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8077] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8078] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8078] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8079] manager: Networking is enabled by state file
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8082] settings: Loaded settings plugin: keyfile (internal)
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8091] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8110] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8120] dhcp: init: Using DHCP client 'internal'
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8123] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8137] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8144] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8152] device (lo): Activation: starting connection 'lo' (0e3dd286-7fba-41e7-8d0b-2929e29deeb1)
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8162] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8167] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8195] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8199] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8202] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8204] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8206] device (eth0): carrier: link connected
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8209] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8217] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8222] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8226] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8227] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8229] manager: NetworkManager state is now CONNECTING
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8230] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8240] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8243] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Started Network Manager.
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Reached target Network.
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8466] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8471] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 09:01:24 np0005593293.novalocal NetworkManager[855]: <info>  [1769158884.8478] device (lo): Activation: successful, device activated.
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Reached target NFS client services.
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: Reached target Remote File Systems.
Jan 23 09:01:24 np0005593293.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 09:01:28 np0005593293.novalocal NetworkManager[855]: <info>  [1769158888.2411] dhcp4 (eth0): state changed new lease, address=38.129.56.206
Jan 23 09:01:28 np0005593293.novalocal NetworkManager[855]: <info>  [1769158888.2433] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 09:01:28 np0005593293.novalocal NetworkManager[855]: <info>  [1769158888.2473] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:01:28 np0005593293.novalocal NetworkManager[855]: <info>  [1769158888.2506] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:01:28 np0005593293.novalocal NetworkManager[855]: <info>  [1769158888.2509] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:01:28 np0005593293.novalocal NetworkManager[855]: <info>  [1769158888.2513] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 09:01:28 np0005593293.novalocal NetworkManager[855]: <info>  [1769158888.2515] device (eth0): Activation: successful, device activated.
Jan 23 09:01:28 np0005593293.novalocal NetworkManager[855]: <info>  [1769158888.2520] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 09:01:28 np0005593293.novalocal NetworkManager[855]: <info>  [1769158888.2523] manager: startup complete
Jan 23 09:01:28 np0005593293.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 23 09:01:28 np0005593293.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: Cloud-init v. 24.4-8.el9 running 'init' at Fri, 23 Jan 2026 09:01:28 +0000. Up 10.17 seconds.
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: |  eth0  | True |        38.129.56.206         | 255.255.255.0 | global | fa:16:3e:01:51:4d |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fe01:514d/64 |       .       |  link  | fa:16:3e:01:51:4d |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Jan 23 09:01:28 np0005593293.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 09:01:29 np0005593293.novalocal useradd[986]: new group: name=cloud-user, GID=1001
Jan 23 09:01:29 np0005593293.novalocal useradd[986]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 23 09:01:29 np0005593293.novalocal useradd[986]: add 'cloud-user' to group 'adm'
Jan 23 09:01:29 np0005593293.novalocal useradd[986]: add 'cloud-user' to group 'systemd-journal'
Jan 23 09:01:29 np0005593293.novalocal useradd[986]: add 'cloud-user' to shadow group 'adm'
Jan 23 09:01:29 np0005593293.novalocal useradd[986]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: Generating public/private rsa key pair.
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: The key fingerprint is:
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: SHA256:Zv5Nklgu5Px7Z2L6MoVKKy0OyX+BRyE7O8mBGzaGMew root@np0005593293.novalocal
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: The key's randomart image is:
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: +---[RSA 3072]----+
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: | .               |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |  +   . .        |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: | . + . o .       |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |  E * + .        |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |   o = BS ..     |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |   ...*B++...    |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |    +  =*++..    |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |     oo =+o++ o  |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |     .o+  =Oo+   |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: Generating public/private ecdsa key pair.
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: The key fingerprint is:
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: SHA256:miLeFBmA+MsGxU8XSCXX2tzFYSlAjlwKVaJr1qwMkok root@np0005593293.novalocal
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: The key's randomart image is:
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: +---[ECDSA 256]---+
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |.o..++*==. .oo   |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |o o.o*.*. ..+    |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: | o oo.++.. o     |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |oo. .*. o .      |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |Eo..* o S        |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: | .+= o o         |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: | .. = o          |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: | . + .           |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |  . .            |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: Generating public/private ed25519 key pair.
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: The key fingerprint is:
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: SHA256:b0o3optgTJFv6mmeQoOvX31kKYaaUgPCCmDUcQEZQds root@np0005593293.novalocal
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: The key's randomart image is:
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: +--[ED25519 256]--+
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |ooo**o.          |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |+  o+.           |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |+. .oE           |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |+.   +   .       |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |..o o = S        |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |..o* = + .       |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |.oo.* . + =      |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: | .o+.+ = = .     |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: |.o.+= +..        |
Jan 23 09:01:30 np0005593293.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Reached target Network is Online.
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Starting System Logging Service...
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 23 09:01:30 np0005593293.novalocal sm-notify[1002]: Version 2.5.4 starting
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Starting Permit User Sessions...
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Finished Permit User Sessions.
Jan 23 09:01:30 np0005593293.novalocal sshd[1004]: Server listening on 0.0.0.0 port 22.
Jan 23 09:01:30 np0005593293.novalocal sshd[1004]: Server listening on :: port 22.
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Started Command Scheduler.
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Started Getty on tty1.
Jan 23 09:01:30 np0005593293.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Jan 23 09:01:30 np0005593293.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 23 09:01:30 np0005593293.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 50% if used.)
Jan 23 09:01:30 np0005593293.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Reached target Login Prompts.
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Started System Logging Service.
Jan 23 09:01:30 np0005593293.novalocal rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Jan 23 09:01:30 np0005593293.novalocal rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Reached target Multi-User System.
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 23 09:01:30 np0005593293.novalocal rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 09:01:30 np0005593293.novalocal kdumpctl[1013]: kdump: No kdump initial ramdisk found.
Jan 23 09:01:30 np0005593293.novalocal kdumpctl[1013]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 23 09:01:30 np0005593293.novalocal sshd-session[1142]: Connection reset by 38.102.83.114 port 39370 [preauth]
Jan 23 09:01:30 np0005593293.novalocal cloud-init[1147]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Fri, 23 Jan 2026 09:01:30 +0000. Up 12.16 seconds.
Jan 23 09:01:30 np0005593293.novalocal sshd-session[1153]: Unable to negotiate with 38.102.83.114 port 50944: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 23 09:01:30 np0005593293.novalocal sshd-session[1161]: Connection closed by 38.102.83.114 port 50948 [preauth]
Jan 23 09:01:30 np0005593293.novalocal sshd-session[1173]: Unable to negotiate with 38.102.83.114 port 50956: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 23 09:01:30 np0005593293.novalocal sshd-session[1186]: Unable to negotiate with 38.102.83.114 port 50970: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 23 09:01:30 np0005593293.novalocal sshd-session[1194]: Connection reset by 38.102.83.114 port 50972 [preauth]
Jan 23 09:01:30 np0005593293.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 23 09:01:30 np0005593293.novalocal sshd-session[1214]: Connection reset by 38.102.83.114 port 50984 [preauth]
Jan 23 09:01:30 np0005593293.novalocal sshd-session[1226]: Unable to negotiate with 38.102.83.114 port 50986: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 23 09:01:30 np0005593293.novalocal sshd-session[1233]: Unable to negotiate with 38.102.83.114 port 50992: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 23 09:01:30 np0005593293.novalocal dracut[1282]: dracut-057-102.git20250818.el9
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 23 09:01:31 np0005593293.novalocal cloud-init[1310]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Fri, 23 Jan 2026 09:01:30 +0000. Up 12.56 seconds.
Jan 23 09:01:31 np0005593293.novalocal cloud-init[1343]: #############################################################
Jan 23 09:01:31 np0005593293.novalocal cloud-init[1348]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 23 09:01:31 np0005593293.novalocal cloud-init[1356]: 256 SHA256:miLeFBmA+MsGxU8XSCXX2tzFYSlAjlwKVaJr1qwMkok root@np0005593293.novalocal (ECDSA)
Jan 23 09:01:31 np0005593293.novalocal cloud-init[1359]: 256 SHA256:b0o3optgTJFv6mmeQoOvX31kKYaaUgPCCmDUcQEZQds root@np0005593293.novalocal (ED25519)
Jan 23 09:01:31 np0005593293.novalocal cloud-init[1361]: 3072 SHA256:Zv5Nklgu5Px7Z2L6MoVKKy0OyX+BRyE7O8mBGzaGMew root@np0005593293.novalocal (RSA)
Jan 23 09:01:31 np0005593293.novalocal cloud-init[1362]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 23 09:01:31 np0005593293.novalocal cloud-init[1363]: #############################################################
Jan 23 09:01:31 np0005593293.novalocal cloud-init[1310]: Cloud-init v. 24.4-8.el9 finished at Fri, 23 Jan 2026 09:01:31 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.75 seconds
Jan 23 09:01:31 np0005593293.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 23 09:01:31 np0005593293.novalocal systemd[1]: Reached target Cloud-init target.
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 23 09:01:31 np0005593293.novalocal dracut[1284]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: memstrack is not available
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: memstrack is not available
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: *** Including module: systemd ***
Jan 23 09:01:32 np0005593293.novalocal dracut[1284]: *** Including module: fips ***
Jan 23 09:01:33 np0005593293.novalocal chronyd[791]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Jan 23 09:01:33 np0005593293.novalocal chronyd[791]: System clock TAI offset set to 37 seconds
Jan 23 09:01:33 np0005593293.novalocal dracut[1284]: *** Including module: systemd-initrd ***
Jan 23 09:01:33 np0005593293.novalocal dracut[1284]: *** Including module: i18n ***
Jan 23 09:01:33 np0005593293.novalocal dracut[1284]: *** Including module: drm ***
Jan 23 09:01:33 np0005593293.novalocal dracut[1284]: *** Including module: prefixdevname ***
Jan 23 09:01:33 np0005593293.novalocal dracut[1284]: *** Including module: kernel-modules ***
Jan 23 09:01:33 np0005593293.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 23 09:01:34 np0005593293.novalocal dracut[1284]: *** Including module: kernel-modules-extra ***
Jan 23 09:01:34 np0005593293.novalocal dracut[1284]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 23 09:01:34 np0005593293.novalocal dracut[1284]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 23 09:01:34 np0005593293.novalocal dracut[1284]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 23 09:01:34 np0005593293.novalocal dracut[1284]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 23 09:01:34 np0005593293.novalocal dracut[1284]: *** Including module: qemu ***
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: IRQ 35 affinity is now unmanaged
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: IRQ 33 affinity is now unmanaged
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: IRQ 31 affinity is now unmanaged
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: IRQ 28 affinity is now unmanaged
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: IRQ 34 affinity is now unmanaged
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: IRQ 32 affinity is now unmanaged
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: IRQ 30 affinity is now unmanaged
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 23 09:01:34 np0005593293.novalocal irqbalance[779]: IRQ 29 affinity is now unmanaged
Jan 23 09:01:34 np0005593293.novalocal dracut[1284]: *** Including module: fstab-sys ***
Jan 23 09:01:34 np0005593293.novalocal dracut[1284]: *** Including module: rootfs-block ***
Jan 23 09:01:34 np0005593293.novalocal dracut[1284]: *** Including module: terminfo ***
Jan 23 09:01:34 np0005593293.novalocal dracut[1284]: *** Including module: udev-rules ***
Jan 23 09:01:35 np0005593293.novalocal dracut[1284]: Skipping udev rule: 91-permissions.rules
Jan 23 09:01:35 np0005593293.novalocal dracut[1284]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 23 09:01:35 np0005593293.novalocal dracut[1284]: *** Including module: virtiofs ***
Jan 23 09:01:35 np0005593293.novalocal dracut[1284]: *** Including module: dracut-systemd ***
Jan 23 09:01:35 np0005593293.novalocal dracut[1284]: *** Including module: usrmount ***
Jan 23 09:01:35 np0005593293.novalocal dracut[1284]: *** Including module: base ***
Jan 23 09:01:35 np0005593293.novalocal dracut[1284]: *** Including module: fs-lib ***
Jan 23 09:01:35 np0005593293.novalocal dracut[1284]: *** Including module: kdumpbase ***
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:   microcode_ctl module: mangling fw_dir
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: configuration "intel" is ignored
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]: *** Including module: openssl ***
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]: *** Including module: shutdown ***
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]: *** Including module: squash ***
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]: *** Including modules done ***
Jan 23 09:01:36 np0005593293.novalocal dracut[1284]: *** Installing kernel module dependencies ***
Jan 23 09:01:37 np0005593293.novalocal dracut[1284]: *** Installing kernel module dependencies done ***
Jan 23 09:01:37 np0005593293.novalocal dracut[1284]: *** Resolving executable dependencies ***
Jan 23 09:01:38 np0005593293.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 09:01:39 np0005593293.novalocal dracut[1284]: *** Resolving executable dependencies done ***
Jan 23 09:01:39 np0005593293.novalocal dracut[1284]: *** Generating early-microcode cpio image ***
Jan 23 09:01:39 np0005593293.novalocal dracut[1284]: *** Store current command line parameters ***
Jan 23 09:01:39 np0005593293.novalocal dracut[1284]: Stored kernel commandline:
Jan 23 09:01:39 np0005593293.novalocal dracut[1284]: No dracut internal kernel commandline stored in the initramfs
Jan 23 09:01:39 np0005593293.novalocal dracut[1284]: *** Install squash loader ***
Jan 23 09:01:40 np0005593293.novalocal dracut[1284]: *** Squashing the files inside the initramfs ***
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: *** Squashing the files inside the initramfs done ***
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: *** Hardlinking files ***
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: Mode:           real
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: Files:          50
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: Linked:         0 files
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: Compared:       0 xattrs
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: Compared:       0 files
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: Saved:          0 B
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: Duration:       0.000595 seconds
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: *** Hardlinking files done ***
Jan 23 09:01:41 np0005593293.novalocal dracut[1284]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 23 09:01:42 np0005593293.novalocal kdumpctl[1013]: kdump: kexec: loaded kdump kernel
Jan 23 09:01:42 np0005593293.novalocal kdumpctl[1013]: kdump: Starting kdump: [OK]
Jan 23 09:01:42 np0005593293.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 23 09:01:42 np0005593293.novalocal systemd[1]: Startup finished in 1.498s (kernel) + 2.447s (initrd) + 19.767s (userspace) = 23.714s.
Jan 23 09:01:44 np0005593293.novalocal sshd-session[4300]: Accepted publickey for zuul from 38.102.83.114 port 38800 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 23 09:01:44 np0005593293.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 23 09:01:44 np0005593293.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 23 09:01:44 np0005593293.novalocal systemd-logind[784]: New session 1 of user zuul.
Jan 23 09:01:44 np0005593293.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 23 09:01:44 np0005593293.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Queued start job for default target Main User Target.
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Created slice User Application Slice.
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Reached target Paths.
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Reached target Timers.
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Starting D-Bus User Message Bus Socket...
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Starting Create User's Volatile Files and Directories...
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Finished Create User's Volatile Files and Directories.
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Listening on D-Bus User Message Bus Socket.
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Reached target Sockets.
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Reached target Basic System.
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Reached target Main User Target.
Jan 23 09:01:44 np0005593293.novalocal systemd[4304]: Startup finished in 117ms.
Jan 23 09:01:44 np0005593293.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 23 09:01:44 np0005593293.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 23 09:01:44 np0005593293.novalocal sshd-session[4300]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:01:45 np0005593293.novalocal python3[4386]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:01:48 np0005593293.novalocal python3[4414]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:01:54 np0005593293.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 09:01:55 np0005593293.novalocal python3[4474]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:01:56 np0005593293.novalocal python3[4514]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 23 09:01:58 np0005593293.novalocal python3[4540]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChWBsfs5FtlYIS47KhLNXtsYVhP6UT/w4WYq1l1d/b7+cXPAwAb4Qt1cc/BmNcKM419a6D+CvPejxC67s0h4ksuceBjB/s6b88/zjf8Lio8Dd87f6J+f6IY8ByYIQ8s3Hvn6z0K7HSyEMuQ0B/CLxeBW4MJFqcoLK2v7Y8SNPGLr8w/8y79OWnJJPKmfM4ACTo2JwqmPGI/4+LQsCZS/p/yKDTO5AYxsIUwWw/IX3Jxs67UOBqa40onmgM/VRkfGY512fziVUNkmFHG2Aqgosbpbz/XysrVTpvLRA/H2zpGbbTbuEg6xp8vHQO5V0csAd6p3cdOixjdaPmf9oy3+yXuIeWwnnxPHqvVDY6N9aaIX4vuajxOoMUFiQ2YtcDq7sCn8HoateyYgIL/u2+pInArUiYGemyMEWja0DhD6UdCkY0Ea+YDWeIZKM505N+HClR5jfjjVW35TndY+AldV5OhOzMRmPjtJYS8a0usUXRvmxRfMFSmO9CI1RfNmod9X0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:01:59 np0005593293.novalocal python3[4564]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:01:59 np0005593293.novalocal python3[4663]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:02:00 np0005593293.novalocal python3[4734]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769158919.4817524-251-173879044929477/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=79d1f7c5e92f4d57bb17665cf28be8d8_id_rsa follow=False checksum=70fc72f3adde7c23bd22f0e2ad4ebdd2e15c011a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:00 np0005593293.novalocal python3[4857]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:02:01 np0005593293.novalocal python3[4928]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769158920.5157917-306-12176675905881/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=79d1f7c5e92f4d57bb17665cf28be8d8_id_rsa.pub follow=False checksum=1817e5216c13f90f69486a375706d090e99f2d79 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:02 np0005593293.novalocal python3[4976]: ansible-ping Invoked with data=pong
Jan 23 09:02:03 np0005593293.novalocal python3[5000]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:02:05 np0005593293.novalocal python3[5058]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 23 09:02:07 np0005593293.novalocal python3[5090]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:07 np0005593293.novalocal python3[5114]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:07 np0005593293.novalocal python3[5138]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:07 np0005593293.novalocal python3[5162]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:08 np0005593293.novalocal python3[5186]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:08 np0005593293.novalocal python3[5210]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:10 np0005593293.novalocal sudo[5234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zihsfbsrxhqqyzgkohskblfatmprpjsi ; /usr/bin/python3'
Jan 23 09:02:10 np0005593293.novalocal sudo[5234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:02:10 np0005593293.novalocal python3[5236]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:10 np0005593293.novalocal sudo[5234]: pam_unix(sudo:session): session closed for user root
Jan 23 09:02:10 np0005593293.novalocal sudo[5312]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcdraiwxpcfnahxsfgeowaotkqhbpmsz ; /usr/bin/python3'
Jan 23 09:02:10 np0005593293.novalocal sudo[5312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:02:10 np0005593293.novalocal python3[5314]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:02:10 np0005593293.novalocal sudo[5312]: pam_unix(sudo:session): session closed for user root
Jan 23 09:02:11 np0005593293.novalocal sudo[5385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcaojjvugmmqcflinpxspugrmsttanok ; /usr/bin/python3'
Jan 23 09:02:11 np0005593293.novalocal sudo[5385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:02:11 np0005593293.novalocal python3[5387]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158930.4571152-31-11921095649388/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:11 np0005593293.novalocal sudo[5385]: pam_unix(sudo:session): session closed for user root
Jan 23 09:02:12 np0005593293.novalocal python3[5435]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:12 np0005593293.novalocal python3[5459]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:12 np0005593293.novalocal python3[5483]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:13 np0005593293.novalocal python3[5507]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:13 np0005593293.novalocal python3[5531]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:13 np0005593293.novalocal python3[5555]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:13 np0005593293.novalocal python3[5579]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:14 np0005593293.novalocal python3[5603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:14 np0005593293.novalocal python3[5627]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:14 np0005593293.novalocal python3[5651]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:15 np0005593293.novalocal python3[5675]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:15 np0005593293.novalocal python3[5699]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:15 np0005593293.novalocal python3[5723]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:15 np0005593293.novalocal python3[5747]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:16 np0005593293.novalocal python3[5771]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:16 np0005593293.novalocal python3[5795]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:17 np0005593293.novalocal python3[5819]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:17 np0005593293.novalocal python3[5843]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:17 np0005593293.novalocal python3[5867]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:17 np0005593293.novalocal python3[5891]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:18 np0005593293.novalocal python3[5915]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:18 np0005593293.novalocal python3[5939]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:18 np0005593293.novalocal python3[5963]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:18 np0005593293.novalocal python3[5987]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:19 np0005593293.novalocal python3[6011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:19 np0005593293.novalocal python3[6035]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:02:21 np0005593293.novalocal sudo[6059]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmnphhedtbrcsyfglpfxkuguwndmsjkg ; /usr/bin/python3'
Jan 23 09:02:21 np0005593293.novalocal sudo[6059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:02:21 np0005593293.novalocal python3[6061]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 23 09:02:21 np0005593293.novalocal systemd[1]: Starting Time & Date Service...
Jan 23 09:02:22 np0005593293.novalocal systemd[1]: Started Time & Date Service.
Jan 23 09:02:22 np0005593293.novalocal systemd-timedated[6063]: Changed time zone to 'UTC' (UTC).
Jan 23 09:02:22 np0005593293.novalocal sudo[6059]: pam_unix(sudo:session): session closed for user root
Jan 23 09:02:22 np0005593293.novalocal sudo[6090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pibijdefdpysgyhxlsqnnzofwukrcefq ; /usr/bin/python3'
Jan 23 09:02:22 np0005593293.novalocal sudo[6090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:02:22 np0005593293.novalocal python3[6092]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:22 np0005593293.novalocal sudo[6090]: pam_unix(sudo:session): session closed for user root
Jan 23 09:02:22 np0005593293.novalocal python3[6168]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:02:23 np0005593293.novalocal python3[6239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769158942.654835-251-170396581334155/source _original_basename=tmpeay2uzd1 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:23 np0005593293.novalocal python3[6339]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:02:24 np0005593293.novalocal python3[6410]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769158943.5449896-301-116881696317325/source _original_basename=tmph_iku2ub follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:24 np0005593293.novalocal sudo[6510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubwvnsuewscabbglagogzhvtepdvmhmw ; /usr/bin/python3'
Jan 23 09:02:24 np0005593293.novalocal sudo[6510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:02:25 np0005593293.novalocal python3[6512]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:02:25 np0005593293.novalocal sudo[6510]: pam_unix(sudo:session): session closed for user root
Jan 23 09:02:25 np0005593293.novalocal sudo[6583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtutmphcuiyxkamvxubgpdqofkpczqsp ; /usr/bin/python3'
Jan 23 09:02:25 np0005593293.novalocal sudo[6583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:02:25 np0005593293.novalocal python3[6585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769158944.7390676-381-143356496831974/source _original_basename=tmpuoi508px follow=False checksum=ea64940936b03df732f2448cb0a820d57d2e54a6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:25 np0005593293.novalocal sudo[6583]: pam_unix(sudo:session): session closed for user root
Jan 23 09:02:26 np0005593293.novalocal python3[6633]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:02:26 np0005593293.novalocal python3[6659]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:02:27 np0005593293.novalocal sudo[6737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scdpmvduxeaihkdyyamtbvhtjkftdhdq ; /usr/bin/python3'
Jan 23 09:02:27 np0005593293.novalocal sudo[6737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:02:27 np0005593293.novalocal python3[6739]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:02:27 np0005593293.novalocal sudo[6737]: pam_unix(sudo:session): session closed for user root
Jan 23 09:02:27 np0005593293.novalocal sudo[6810]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajsyplghyqwlwckjwqwmtajfohskvmap ; /usr/bin/python3'
Jan 23 09:02:27 np0005593293.novalocal sudo[6810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:02:28 np0005593293.novalocal python3[6812]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158947.3322117-451-95075996926911/source _original_basename=tmpn6lswhf0 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:28 np0005593293.novalocal sudo[6810]: pam_unix(sudo:session): session closed for user root
Jan 23 09:02:28 np0005593293.novalocal sudo[6861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpneclwafxixcfubxbwjpieuokhooduq ; /usr/bin/python3'
Jan 23 09:02:28 np0005593293.novalocal sudo[6861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:02:28 np0005593293.novalocal python3[6863]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-639e-86bd-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:02:28 np0005593293.novalocal sudo[6861]: pam_unix(sudo:session): session closed for user root
Jan 23 09:02:29 np0005593293.novalocal python3[6891]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-639e-86bd-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 23 09:02:30 np0005593293.novalocal python3[6919]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:48 np0005593293.novalocal sudo[6943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpvknfcwzvywgtzqgntwoxswuwykzmyr ; /usr/bin/python3'
Jan 23 09:02:48 np0005593293.novalocal sudo[6943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:02:48 np0005593293.novalocal python3[6945]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:02:48 np0005593293.novalocal sudo[6943]: pam_unix(sudo:session): session closed for user root
Jan 23 09:02:52 np0005593293.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 09:03:30 np0005593293.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 09:03:30 np0005593293.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 23 09:03:30 np0005593293.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 23 09:03:30 np0005593293.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 23 09:03:30 np0005593293.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 23 09:03:30 np0005593293.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 23 09:03:30 np0005593293.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 23 09:03:30 np0005593293.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 23 09:03:30 np0005593293.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 23 09:03:30 np0005593293.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 23 09:03:30 np0005593293.novalocal NetworkManager[855]: <info>  [1769159010.4780] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 09:03:30 np0005593293.novalocal systemd-udevd[6948]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 09:03:30 np0005593293.novalocal NetworkManager[855]: <info>  [1769159010.4982] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:03:30 np0005593293.novalocal NetworkManager[855]: <info>  [1769159010.5018] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 23 09:03:30 np0005593293.novalocal NetworkManager[855]: <info>  [1769159010.5024] device (eth1): carrier: link connected
Jan 23 09:03:30 np0005593293.novalocal NetworkManager[855]: <info>  [1769159010.5027] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 23 09:03:30 np0005593293.novalocal NetworkManager[855]: <info>  [1769159010.5035] policy: auto-activating connection 'Wired connection 1' (80f800fd-9bd3-3b41-8339-5a455d46d8c5)
Jan 23 09:03:30 np0005593293.novalocal NetworkManager[855]: <info>  [1769159010.5040] device (eth1): Activation: starting connection 'Wired connection 1' (80f800fd-9bd3-3b41-8339-5a455d46d8c5)
Jan 23 09:03:30 np0005593293.novalocal NetworkManager[855]: <info>  [1769159010.5041] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:03:30 np0005593293.novalocal NetworkManager[855]: <info>  [1769159010.5044] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:03:30 np0005593293.novalocal NetworkManager[855]: <info>  [1769159010.5049] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:03:30 np0005593293.novalocal NetworkManager[855]: <info>  [1769159010.5055] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 09:03:31 np0005593293.novalocal python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-4543-3693-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:03:41 np0005593293.novalocal sudo[7054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pptiatqrsamrdtjnyplvtmplbscnosnw ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 09:03:41 np0005593293.novalocal sudo[7054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:03:41 np0005593293.novalocal python3[7056]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:03:41 np0005593293.novalocal sudo[7054]: pam_unix(sudo:session): session closed for user root
Jan 23 09:03:41 np0005593293.novalocal sudo[7127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdxoyhpidxkgdlcluvkaigzohborvtwa ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 09:03:41 np0005593293.novalocal sudo[7127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:03:42 np0005593293.novalocal python3[7129]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769159021.3305278-104-94129406854838/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=2eff1c515c815bdb72f89804dc252cf6b4af17ef backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:03:42 np0005593293.novalocal sudo[7127]: pam_unix(sudo:session): session closed for user root
Jan 23 09:03:42 np0005593293.novalocal sudo[7177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpybivpwxddowqekhjbufukghrxhghei ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 09:03:42 np0005593293.novalocal sudo[7177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:03:42 np0005593293.novalocal python3[7179]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 09:03:42 np0005593293.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 23 09:03:42 np0005593293.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 23 09:03:42 np0005593293.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 23 09:03:42 np0005593293.novalocal NetworkManager[855]: <info>  [1769159022.9052] caught SIGTERM, shutting down normally.
Jan 23 09:03:42 np0005593293.novalocal systemd[1]: Stopping Network Manager...
Jan 23 09:03:42 np0005593293.novalocal NetworkManager[855]: <info>  [1769159022.9066] dhcp4 (eth0): canceled DHCP transaction
Jan 23 09:03:42 np0005593293.novalocal NetworkManager[855]: <info>  [1769159022.9067] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 09:03:42 np0005593293.novalocal NetworkManager[855]: <info>  [1769159022.9067] dhcp4 (eth0): state changed no lease
Jan 23 09:03:42 np0005593293.novalocal NetworkManager[855]: <info>  [1769159022.9072] manager: NetworkManager state is now CONNECTING
Jan 23 09:03:42 np0005593293.novalocal NetworkManager[855]: <info>  [1769159022.9194] dhcp4 (eth1): canceled DHCP transaction
Jan 23 09:03:42 np0005593293.novalocal NetworkManager[855]: <info>  [1769159022.9195] dhcp4 (eth1): state changed no lease
Jan 23 09:03:42 np0005593293.novalocal NetworkManager[855]: <info>  [1769159022.9279] exiting (success)
Jan 23 09:03:42 np0005593293.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 09:03:42 np0005593293.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 09:03:42 np0005593293.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 23 09:03:42 np0005593293.novalocal systemd[1]: Stopped Network Manager.
Jan 23 09:03:42 np0005593293.novalocal systemd[1]: Starting Network Manager...
Jan 23 09:03:42 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159022.9907] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:32ce9a2a-527f-4400-a04f-d4b7f74a7a70)
Jan 23 09:03:42 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159022.9910] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 09:03:42 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159022.9979] manager[0x55be258b0000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 09:03:43 np0005593293.novalocal systemd[1]: Starting Hostname Service...
Jan 23 09:03:43 np0005593293.novalocal systemd[1]: Started Hostname Service.
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1077] hostname: hostname: using hostnamed
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1080] hostname: static hostname changed from (none) to "np0005593293.novalocal"
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1084] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1089] manager[0x55be258b0000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1089] manager[0x55be258b0000]: rfkill: WWAN hardware radio set enabled
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1114] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1115] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1115] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1116] manager: Networking is enabled by state file
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1118] settings: Loaded settings plugin: keyfile (internal)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1121] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1147] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1156] dhcp: init: Using DHCP client 'internal'
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1158] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1167] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1172] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1179] device (lo): Activation: starting connection 'lo' (0e3dd286-7fba-41e7-8d0b-2929e29deeb1)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1185] device (eth0): carrier: link connected
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1188] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1192] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1192] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1198] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1210] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1216] device (eth1): carrier: link connected
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1219] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1223] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (80f800fd-9bd3-3b41-8339-5a455d46d8c5) (indicated)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1223] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1227] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1240] device (eth1): Activation: starting connection 'Wired connection 1' (80f800fd-9bd3-3b41-8339-5a455d46d8c5)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1246] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 09:03:43 np0005593293.novalocal systemd[1]: Started Network Manager.
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1257] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1260] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1263] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1266] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1270] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1272] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1276] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1280] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1287] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1292] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1301] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1304] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1324] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1329] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1335] device (lo): Activation: successful, device activated.
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1341] dhcp4 (eth0): state changed new lease, address=38.129.56.206
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1346] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1417] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1460] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1461] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1463] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1465] device (eth0): Activation: successful, device activated.
Jan 23 09:03:43 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159023.1470] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 09:03:43 np0005593293.novalocal sudo[7177]: pam_unix(sudo:session): session closed for user root
Jan 23 09:03:43 np0005593293.novalocal python3[7263]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-4543-3693-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:03:53 np0005593293.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 09:04:13 np0005593293.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 09:04:27 np0005593293.novalocal systemd[4304]: Starting Mark boot as successful...
Jan 23 09:04:27 np0005593293.novalocal systemd[4304]: Finished Mark boot as successful.
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4106] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 09:04:28 np0005593293.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 09:04:28 np0005593293.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4458] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4459] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4465] device (eth1): Activation: successful, device activated.
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4473] manager: startup complete
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4477] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <warn>  [1769159068.4482] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4490] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 23 09:04:28 np0005593293.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4564] dhcp4 (eth1): canceled DHCP transaction
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4565] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4565] dhcp4 (eth1): state changed no lease
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4576] policy: auto-activating connection 'ci-private-network' (568f73b4-88ba-5ba3-8eca-ff7d1807a044)
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4581] device (eth1): Activation: starting connection 'ci-private-network' (568f73b4-88ba-5ba3-8eca-ff7d1807a044)
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4581] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4584] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4590] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4599] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4640] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4647] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:04:28 np0005593293.novalocal NetworkManager[7189]: <info>  [1769159068.4658] device (eth1): Activation: successful, device activated.
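[editor's note] The eth1 sequence above shows NetworkManager giving up on the assumed DHCP profile ('Wired connection 1') once startup completes without a lease (reason 'ip-config-unavailable'), then auto-activating the 'ci-private-network' profile instead. A minimal sketch of how such a fallback profile could be inspected or reproduced with nmcli follows; the address and priority values are illustrative assumptions, not taken from this host.

    # Show which profile ended up bound to eth1 and its state
    nmcli -f NAME,UUID,DEVICE,STATE connection show
    nmcli device show eth1

    # Hypothetical static fallback profile similar to 'ci-private-network'
    # (subnet and autoconnect priority are assumptions for illustration)
    nmcli connection add type ethernet ifname eth1 con-name ci-private-network \
        ipv4.method manual ipv4.addresses 192.168.122.50/24 \
        connection.autoconnect yes connection.autoconnect-priority -10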
Jan 23 09:04:38 np0005593293.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 09:04:43 np0005593293.novalocal sshd-session[4313]: Received disconnect from 38.102.83.114 port 38800:11: disconnected by user
Jan 23 09:04:43 np0005593293.novalocal sshd-session[4313]: Disconnected from user zuul 38.102.83.114 port 38800
Jan 23 09:04:43 np0005593293.novalocal sshd-session[4300]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:04:43 np0005593293.novalocal systemd-logind[784]: Session 1 logged out. Waiting for processes to exit.
Jan 23 09:05:52 np0005593293.novalocal sshd-session[7292]: Accepted publickey for zuul from 38.102.83.114 port 39246 ssh2: RSA SHA256:/TrmfiPCpRhp7iDH6L+XY56Icv2RRStSYrCVh8OnXTQ
Jan 23 09:05:52 np0005593293.novalocal systemd-logind[784]: New session 3 of user zuul.
Jan 23 09:05:52 np0005593293.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 23 09:05:52 np0005593293.novalocal sshd-session[7292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:05:52 np0005593293.novalocal sudo[7371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjeqdhjrvwgbtpquntcfefwytsnqbako ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 09:05:52 np0005593293.novalocal sudo[7371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:05:52 np0005593293.novalocal python3[7373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:05:52 np0005593293.novalocal sudo[7371]: pam_unix(sudo:session): session closed for user root
Jan 23 09:05:52 np0005593293.novalocal sudo[7444]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzaopzuegsugalnfnwmsmkqbnqbddsbs ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 23 09:05:52 np0005593293.novalocal sudo[7444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:05:53 np0005593293.novalocal python3[7446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159152.4485953-373-60220237497568/source _original_basename=tmpoch6qqu9 follow=False checksum=6e1e8970cf6ad2f0b1a32d462d71e8a0528ec2d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:05:53 np0005593293.novalocal sudo[7444]: pam_unix(sudo:session): session closed for user root
Jan 23 09:05:57 np0005593293.novalocal sshd-session[7295]: Connection closed by 38.102.83.114 port 39246
Jan 23 09:05:57 np0005593293.novalocal sshd-session[7292]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:05:57 np0005593293.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 09:05:57 np0005593293.novalocal systemd-logind[784]: Session 3 logged out. Waiting for processes to exit.
Jan 23 09:05:57 np0005593293.novalocal systemd-logind[784]: Removed session 3.
Jan 23 09:07:27 np0005593293.novalocal systemd[4304]: Created slice User Background Tasks Slice.
Jan 23 09:07:27 np0005593293.novalocal systemd[4304]: Starting Cleanup of User's Temporary Files and Directories...
Jan 23 09:07:27 np0005593293.novalocal systemd[4304]: Finished Cleanup of User's Temporary Files and Directories.
Jan 23 09:14:49 np0005593293.novalocal systemd[1]: Starting dnf makecache...
Jan 23 09:14:49 np0005593293.novalocal sshd-session[7477]: Accepted publickey for zuul from 38.102.83.114 port 57028 ssh2: RSA SHA256:/TrmfiPCpRhp7iDH6L+XY56Icv2RRStSYrCVh8OnXTQ
Jan 23 09:14:49 np0005593293.novalocal systemd-logind[784]: New session 4 of user zuul.
Jan 23 09:14:49 np0005593293.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 23 09:14:49 np0005593293.novalocal sshd-session[7477]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:14:49 np0005593293.novalocal sudo[7505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnkidsiqsuyzscernohiosmlhmlixiyt ; /usr/bin/python3'
Jan 23 09:14:49 np0005593293.novalocal sudo[7505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:50 np0005593293.novalocal dnf[7479]: Failed determining last makecache time.
Jan 23 09:14:50 np0005593293.novalocal python3[7507]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-5353-1fb2-00000000217f-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:14:50 np0005593293.novalocal sudo[7505]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:50 np0005593293.novalocal dnf[7479]: CentOS Stream 9 - BaseOS                         50 kB/s | 6.7 kB     00:00
Jan 23 09:14:50 np0005593293.novalocal sudo[7538]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrtjiwxkjrlaoixxmbyiseocrvtvafce ; /usr/bin/python3'
Jan 23 09:14:50 np0005593293.novalocal sudo[7538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:50 np0005593293.novalocal dnf[7479]: CentOS Stream 9 - AppStream                      68 kB/s | 6.8 kB     00:00
Jan 23 09:14:50 np0005593293.novalocal python3[7540]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:14:50 np0005593293.novalocal sudo[7538]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:50 np0005593293.novalocal sudo[7566]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmfjqnxgaonwdhyzunirnwgwmecrfwrv ; /usr/bin/python3'
Jan 23 09:14:50 np0005593293.novalocal sudo[7566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:50 np0005593293.novalocal dnf[7479]: CentOS Stream 9 - CRB                            54 kB/s | 6.6 kB     00:00
Jan 23 09:14:50 np0005593293.novalocal python3[7568]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:14:50 np0005593293.novalocal sudo[7566]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:50 np0005593293.novalocal sudo[7593]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxysnsqtnjbmxzncmpfutyzwykrkrdby ; /usr/bin/python3'
Jan 23 09:14:50 np0005593293.novalocal sudo[7593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:50 np0005593293.novalocal dnf[7479]: CentOS Stream 9 - Extras packages                55 kB/s | 7.3 kB     00:00
Jan 23 09:14:51 np0005593293.novalocal python3[7595]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:14:51 np0005593293.novalocal sudo[7593]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:51 np0005593293.novalocal dnf[7479]: Metadata cache created.
Jan 23 09:14:51 np0005593293.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 23 09:14:51 np0005593293.novalocal systemd[1]: Finished dnf makecache.
Jan 23 09:14:51 np0005593293.novalocal sudo[7619]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcpywxlaujisfewdbmnwwjgztftnfoyk ; /usr/bin/python3'
Jan 23 09:14:51 np0005593293.novalocal sudo[7619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:51 np0005593293.novalocal python3[7622]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:14:51 np0005593293.novalocal sudo[7619]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:51 np0005593293.novalocal sudo[7646]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mygehnzzcqjqbhtekfjnqbbnycewaabw ; /usr/bin/python3'
Jan 23 09:14:51 np0005593293.novalocal sudo[7646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:52 np0005593293.novalocal python3[7648]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:14:52 np0005593293.novalocal sudo[7646]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:52 np0005593293.novalocal sudo[7724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eypofdudnbrptdyudetqxehpmedukwcp ; /usr/bin/python3'
Jan 23 09:14:52 np0005593293.novalocal sudo[7724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:52 np0005593293.novalocal python3[7726]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:14:52 np0005593293.novalocal sudo[7724]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:52 np0005593293.novalocal sudo[7797]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwfzcpdtrxbjuumllbzjedrgaomjdfwn ; /usr/bin/python3'
Jan 23 09:14:52 np0005593293.novalocal sudo[7797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:53 np0005593293.novalocal python3[7799]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159692.4793193-543-154756882482313/source _original_basename=tmpdpsl5apj follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:14:53 np0005593293.novalocal sudo[7797]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:54 np0005593293.novalocal sudo[7847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfwfmguvnogjliknploaknhkqshnaqbc ; /usr/bin/python3'
Jan 23 09:14:54 np0005593293.novalocal sudo[7847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:54 np0005593293.novalocal python3[7849]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 09:14:54 np0005593293.novalocal systemd[1]: Reloading.
Jan 23 09:14:54 np0005593293.novalocal systemd-rc-local-generator[7870]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:14:54 np0005593293.novalocal sudo[7847]: pam_unix(sudo:session): session closed for user root
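[editor's note] The tasks above create /etc/systemd/system.conf.d, copy an override.conf drop-in into it, and reload the systemd manager. Only the file's checksum is logged, so the snippet below is a sketch of the general drop-in mechanism; the DefaultIOAccounting setting is a hypothetical example, not the content actually deployed here.

    # Hypothetical manager drop-in; the real override.conf content is not visible in the log
    sudo mkdir -p /etc/systemd/system.conf.d
    cat <<'EOF' | sudo tee /etc/systemd/system.conf.d/override.conf
    [Manager]
    DefaultIOAccounting=yes
    EOF
    sudo systemctl daemon-reload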
Jan 23 09:14:56 np0005593293.novalocal sudo[7902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awfgvulkesaaygkngdklghabcfuglcmc ; /usr/bin/python3'
Jan 23 09:14:56 np0005593293.novalocal sudo[7902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:56 np0005593293.novalocal python3[7904]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 23 09:14:56 np0005593293.novalocal sudo[7902]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:56 np0005593293.novalocal sudo[7928]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huewssnwwxlrnetaowgxcfdcezzgzjhm ; /usr/bin/python3'
Jan 23 09:14:56 np0005593293.novalocal sudo[7928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:56 np0005593293.novalocal python3[7930]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:14:56 np0005593293.novalocal sudo[7928]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:57 np0005593293.novalocal sudo[7956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woyrvvtebzkgxduocgasmtswepinvyqp ; /usr/bin/python3'
Jan 23 09:14:57 np0005593293.novalocal sudo[7956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:57 np0005593293.novalocal python3[7958]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:14:57 np0005593293.novalocal sudo[7956]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:57 np0005593293.novalocal sudo[7984]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmkjmokfuohlalwcdhthdjlyacdjwael ; /usr/bin/python3'
Jan 23 09:14:57 np0005593293.novalocal sudo[7984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:57 np0005593293.novalocal python3[7986]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:14:57 np0005593293.novalocal sudo[7984]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:57 np0005593293.novalocal sudo[8012]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wawznnitowwbtpyoneaegjslhgdxvhii ; /usr/bin/python3'
Jan 23 09:14:57 np0005593293.novalocal sudo[8012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:14:57 np0005593293.novalocal python3[8014]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:14:57 np0005593293.novalocal sudo[8012]: pam_unix(sudo:session): session closed for user root
Jan 23 09:14:58 np0005593293.novalocal python3[8041]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-5353-1fb2-000000002186-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:15:00 np0005593293.novalocal python3[8070]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
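[editor's note] The block above waits for /sys/fs/cgroup/system.slice/io.max to exist, writes an identical cgroup v2 throttle line for device 252:0 into the init.scope, machine.slice, system.slice, and user.slice cgroups, reads the four files back, and finally checks whether a kubepods.slice exists as well. A minimal shell sketch of the same io.max procedure, assuming 252:0 is the MAJ:MIN reported for /dev/vda by the earlier lsblk call:

    # 252:0 is the MAJ:MIN of /dev/vda (from `lsblk -nd -o MAJ:MIN /dev/vda`)
    limits='252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000'
    for cg in init.scope machine.slice system.slice user.slice; do
        echo "$limits" | sudo tee /sys/fs/cgroup/$cg/io.max
    done
    # Verify what the kernel accepted, mirroring the cat loop in the log
    for cg in init.scope machine.slice system.slice user.slice; do
        echo "$cg"; cat /sys/fs/cgroup/$cg/io.max
    done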
Jan 23 09:15:03 np0005593293.novalocal sshd-session[7481]: Connection closed by 38.102.83.114 port 57028
Jan 23 09:15:03 np0005593293.novalocal sshd-session[7477]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:15:03 np0005593293.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 09:15:03 np0005593293.novalocal systemd[1]: session-4.scope: Consumed 4.321s CPU time.
Jan 23 09:15:03 np0005593293.novalocal systemd-logind[784]: Session 4 logged out. Waiting for processes to exit.
Jan 23 09:15:03 np0005593293.novalocal systemd-logind[784]: Removed session 4.
Jan 23 09:15:05 np0005593293.novalocal sshd-session[8077]: Accepted publickey for zuul from 38.102.83.114 port 39810 ssh2: RSA SHA256:/TrmfiPCpRhp7iDH6L+XY56Icv2RRStSYrCVh8OnXTQ
Jan 23 09:15:05 np0005593293.novalocal systemd-logind[784]: New session 5 of user zuul.
Jan 23 09:15:05 np0005593293.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 23 09:15:05 np0005593293.novalocal sshd-session[8077]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:15:05 np0005593293.novalocal sudo[8104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftdspbeiwzgkllmkrxdgyunehgybxmco ; /usr/bin/python3'
Jan 23 09:15:05 np0005593293.novalocal sudo[8104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:15:05 np0005593293.novalocal python3[8106]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 09:15:17 np0005593293.novalocal setsebool[8142]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 23 09:15:17 np0005593293.novalocal setsebool[8142]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 23 09:15:34 np0005593293.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 23 09:15:34 np0005593293.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 09:15:34 np0005593293.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 23 09:15:34 np0005593293.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 09:15:34 np0005593293.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 23 09:15:34 np0005593293.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 09:15:34 np0005593293.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 09:15:34 np0005593293.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 09:15:48 np0005593293.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 23 09:15:48 np0005593293.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 09:15:48 np0005593293.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 23 09:15:48 np0005593293.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 09:15:48 np0005593293.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 23 09:15:48 np0005593293.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 09:15:48 np0005593293.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 09:15:48 np0005593293.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 09:16:13 np0005593293.novalocal dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 23 09:16:13 np0005593293.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 09:16:13 np0005593293.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 23 09:16:13 np0005593293.novalocal systemd[1]: Reloading.
Jan 23 09:16:13 np0005593293.novalocal systemd-rc-local-generator[8910]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:16:13 np0005593293.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 09:16:15 np0005593293.novalocal sudo[8104]: pam_unix(sudo:session): session closed for user root
Jan 23 09:16:16 np0005593293.novalocal python3[10586]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-f136-f057-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:16:16 np0005593293.novalocal kernel: evm: overlay not supported
Jan 23 09:16:16 np0005593293.novalocal systemd[4304]: Starting D-Bus User Message Bus...
Jan 23 09:16:16 np0005593293.novalocal dbus-broker-launch[11765]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 23 09:16:16 np0005593293.novalocal dbus-broker-launch[11765]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 23 09:16:16 np0005593293.novalocal systemd[4304]: Started D-Bus User Message Bus.
Jan 23 09:16:16 np0005593293.novalocal dbus-broker-lau[11765]: Ready
Jan 23 09:16:16 np0005593293.novalocal systemd[4304]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 23 09:16:16 np0005593293.novalocal systemd[4304]: Created slice Slice /user.
Jan 23 09:16:16 np0005593293.novalocal systemd[4304]: podman-11607.scope: unit configures an IP firewall, but not running as root.
Jan 23 09:16:16 np0005593293.novalocal systemd[4304]: (This warning is only shown for the first unit using IP firewalling.)
Jan 23 09:16:16 np0005593293.novalocal systemd[4304]: Started podman-11607.scope.
Jan 23 09:16:17 np0005593293.novalocal systemd[4304]: Started podman-pause-3184ab7f.scope.
Jan 23 09:16:18 np0005593293.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Jan 23 09:16:18 np0005593293.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 23 09:16:18 np0005593293.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Jan 23 09:16:18 np0005593293.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 23 09:16:20 np0005593293.novalocal sshd-session[8080]: Connection closed by 38.102.83.114 port 39810
Jan 23 09:16:20 np0005593293.novalocal sshd-session[8077]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:16:20 np0005593293.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 09:16:20 np0005593293.novalocal systemd[1]: session-5.scope: Consumed 54.439s CPU time.
Jan 23 09:16:20 np0005593293.novalocal systemd-logind[784]: Session 5 logged out. Waiting for processes to exit.
Jan 23 09:16:20 np0005593293.novalocal systemd-logind[784]: Removed session 5.
Jan 23 09:16:36 np0005593293.novalocal sshd-session[20413]: Connection closed by 38.129.56.17 port 37446 [preauth]
Jan 23 09:16:36 np0005593293.novalocal sshd-session[20420]: Connection closed by 38.129.56.17 port 37458 [preauth]
Jan 23 09:16:36 np0005593293.novalocal sshd-session[20417]: Unable to negotiate with 38.129.56.17 port 37462: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 23 09:16:36 np0005593293.novalocal sshd-session[20419]: Unable to negotiate with 38.129.56.17 port 37464: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 23 09:16:36 np0005593293.novalocal sshd-session[20414]: Unable to negotiate with 38.129.56.17 port 37472: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
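[editor's note] The five preauth failures above come from a client at 38.129.56.17 offering only ssh-ed25519 and security-key host key algorithms, which this sshd cannot match, most likely because no ed25519 host key is present or the algorithm is filtered by the system crypto policy; the working sessions in this log authenticate against the RSA host key instead (the same pattern recurs later from 192.168.122.11). A hedged sketch of how the available host keys could be checked, and an ed25519 host key generated if one were wanted:

    # List the host keys sshd currently has on disk
    ls -l /etc/ssh/ssh_host_*_key.pub

    # Generate an ed25519 host key if it is missing (illustrative; not done in this log)
    sudo ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''
    sudo systemctl restart sshd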
Jan 23 09:16:41 np0005593293.novalocal sshd-session[22414]: Accepted publickey for zuul from 38.102.83.114 port 59828 ssh2: RSA SHA256:/TrmfiPCpRhp7iDH6L+XY56Icv2RRStSYrCVh8OnXTQ
Jan 23 09:16:41 np0005593293.novalocal systemd-logind[784]: New session 6 of user zuul.
Jan 23 09:16:41 np0005593293.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 23 09:16:41 np0005593293.novalocal sshd-session[22414]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:16:41 np0005593293.novalocal python3[22526]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXU6aMT27gF+Yfs/YZWwo3YepWSGuQLHNXTSuo3za5wTzqiDdK4Z0aI/Vfz5yHXRMPrH9UNJkm8FGQwkK4yHMQ= zuul@np0005593292.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:16:42 np0005593293.novalocal sudo[22709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdhywjqaopiotlwcobmfcsjacqqrsxwl ; /usr/bin/python3'
Jan 23 09:16:42 np0005593293.novalocal sudo[22709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:16:42 np0005593293.novalocal python3[22720]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXU6aMT27gF+Yfs/YZWwo3YepWSGuQLHNXTSuo3za5wTzqiDdK4Z0aI/Vfz5yHXRMPrH9UNJkm8FGQwkK4yHMQ= zuul@np0005593292.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:16:42 np0005593293.novalocal sudo[22709]: pam_unix(sudo:session): session closed for user root
Jan 23 09:16:43 np0005593293.novalocal sudo[23057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bswbmczmxeohlhtovthqpxomnopdwtnx ; /usr/bin/python3'
Jan 23 09:16:43 np0005593293.novalocal sudo[23057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:16:43 np0005593293.novalocal python3[23066]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005593293.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 23 09:16:43 np0005593293.novalocal useradd[23233]: new group: name=cloud-admin, GID=1002
Jan 23 09:16:43 np0005593293.novalocal useradd[23233]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 23 09:16:43 np0005593293.novalocal sudo[23057]: pam_unix(sudo:session): session closed for user root
Jan 23 09:16:44 np0005593293.novalocal sudo[23575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aivilgppyveolbijjrxfknobcsgcrpti ; /usr/bin/python3'
Jan 23 09:16:44 np0005593293.novalocal sudo[23575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:16:44 np0005593293.novalocal python3[23584]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXU6aMT27gF+Yfs/YZWwo3YepWSGuQLHNXTSuo3za5wTzqiDdK4Z0aI/Vfz5yHXRMPrH9UNJkm8FGQwkK4yHMQ= zuul@np0005593292.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 09:16:44 np0005593293.novalocal sudo[23575]: pam_unix(sudo:session): session closed for user root
Jan 23 09:16:44 np0005593293.novalocal sudo[23851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsfulbhytypbrlsreatyyfrfwzlutijd ; /usr/bin/python3'
Jan 23 09:16:44 np0005593293.novalocal sudo[23851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:16:45 np0005593293.novalocal python3[23859]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:16:45 np0005593293.novalocal sudo[23851]: pam_unix(sudo:session): session closed for user root
Jan 23 09:16:45 np0005593293.novalocal sudo[24053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhmdrwzupnspalrljntexqbthnjatfui ; /usr/bin/python3'
Jan 23 09:16:45 np0005593293.novalocal sudo[24053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:16:45 np0005593293.novalocal python3[24055]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159804.7512932-150-212351820686926/source _original_basename=tmpxvavdreb follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:16:45 np0005593293.novalocal sudo[24053]: pam_unix(sudo:session): session closed for user root
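[editor's note] The tasks above create a cloud-admin account, install the same zuul ECDSA public key for the zuul, root, and cloud-admin users, and drop a rule into /etc/sudoers.d/cloud-admin (identified only by checksum). A rough shell equivalent, with the sudoers line written as an assumed passwordless rule rather than the file actually copied:

    sudo useradd -m -s /bin/bash cloud-admin
    sudo install -d -m 0700 -o cloud-admin -g cloud-admin /home/cloud-admin/.ssh
    # The key string is the zuul@np0005593292.novalocal ECDSA key shown in the log, abbreviated here
    echo 'ecdsa-sha2-nistp256 AAAA... zuul@np0005593292.novalocal' | \
        sudo tee -a /home/cloud-admin/.ssh/authorized_keys
    sudo chown cloud-admin:cloud-admin /home/cloud-admin/.ssh/authorized_keys
    sudo chmod 0600 /home/cloud-admin/.ssh/authorized_keys

    # Assumed content; the real /etc/sudoers.d/cloud-admin is not logged
    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/cloud-admin
    sudo chmod 0640 /etc/sudoers.d/cloud-admin
    sudo visudo -cf /etc/sudoers.d/cloud-admin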
Jan 23 09:16:46 np0005593293.novalocal sudo[24363]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-babepgvbeqcfmqxdfmkifcnksvgcoldl ; /usr/bin/python3'
Jan 23 09:16:46 np0005593293.novalocal sudo[24363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:16:46 np0005593293.novalocal python3[24370]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 23 09:16:46 np0005593293.novalocal systemd[1]: Starting Hostname Service...
Jan 23 09:16:46 np0005593293.novalocal systemd[1]: Started Hostname Service.
Jan 23 09:16:46 np0005593293.novalocal systemd-hostnamed[24415]: Changed pretty hostname to 'compute-0'
Jan 23 09:16:46 compute-0 systemd-hostnamed[24415]: Hostname set to <compute-0> (static)
Jan 23 09:16:46 compute-0 NetworkManager[7189]: <info>  [1769159806.7379] hostname: static hostname changed from "np0005593293.novalocal" to "compute-0"
Jan 23 09:16:46 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 09:16:46 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 09:16:46 compute-0 sudo[24363]: pam_unix(sudo:session): session closed for user root
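[editor's note] Above, the ansible hostname module (use=systemd) drives systemd-hostnamed, which sets the static and pretty hostname to compute-0 and notifies NetworkManager; subsequent log lines switch from np0005593293.novalocal to compute-0. The direct on-host equivalent would be roughly:

    sudo hostnamectl set-hostname compute-0    # sets the static hostname via systemd-hostnamed
    hostnamectl status                          # confirm; NetworkManager picks up the change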
Jan 23 09:16:47 compute-0 sshd-session[22463]: Connection closed by 38.102.83.114 port 59828
Jan 23 09:16:47 compute-0 sshd-session[22414]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:16:47 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 09:16:47 compute-0 systemd[1]: session-6.scope: Consumed 2.617s CPU time.
Jan 23 09:16:47 compute-0 systemd-logind[784]: Session 6 logged out. Waiting for processes to exit.
Jan 23 09:16:47 compute-0 systemd-logind[784]: Removed session 6.
Jan 23 09:16:56 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 09:17:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 09:17:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 09:17:13 compute-0 systemd[1]: man-db-cache-update.service: Consumed 57.001s CPU time.
Jan 23 09:17:13 compute-0 systemd[1]: run-rff8ac294ffef4caa88a4e817a8ae8195.service: Deactivated successfully.
Jan 23 09:17:16 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 09:21:46 compute-0 sshd-session[29904]: Accepted publickey for zuul from 38.129.56.17 port 54236 ssh2: RSA SHA256:/TrmfiPCpRhp7iDH6L+XY56Icv2RRStSYrCVh8OnXTQ
Jan 23 09:21:46 compute-0 systemd-logind[784]: New session 7 of user zuul.
Jan 23 09:21:46 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 23 09:21:46 compute-0 sshd-session[29904]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:21:47 compute-0 python3[29980]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:21:48 compute-0 sudo[30094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbahasjyskzkrzivclcaipdbylbgdtjm ; /usr/bin/python3'
Jan 23 09:21:48 compute-0 sudo[30094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:49 compute-0 python3[30096]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:21:49 compute-0 sudo[30094]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:49 compute-0 sudo[30167]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqidpajucmomuncldbqmatfpofhmfwwi ; /usr/bin/python3'
Jan 23 09:21:49 compute-0 sudo[30167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:49 compute-0 python3[30169]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769160108.6820278-34063-202265929960841/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:21:49 compute-0 sudo[30167]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:49 compute-0 sudo[30193]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsmvsuogdhzztfmmzwwdvzhgesbxdltm ; /usr/bin/python3'
Jan 23 09:21:49 compute-0 sudo[30193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:49 compute-0 python3[30195]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:21:49 compute-0 sudo[30193]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:50 compute-0 sudo[30266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pisyynzqgruwtgocoebgrlkttaoouqlt ; /usr/bin/python3'
Jan 23 09:21:50 compute-0 sudo[30266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:50 compute-0 python3[30268]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769160108.6820278-34063-202265929960841/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:21:50 compute-0 sudo[30266]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:50 compute-0 sudo[30292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdrxthukoqbjqdgqhztbkorqxnrhmhfs ; /usr/bin/python3'
Jan 23 09:21:50 compute-0 sudo[30292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:50 compute-0 python3[30294]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:21:50 compute-0 sudo[30292]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:50 compute-0 sudo[30365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxiuasjjnruwprsrxpezpxmwlkioumhz ; /usr/bin/python3'
Jan 23 09:21:50 compute-0 sudo[30365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:50 compute-0 python3[30367]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769160108.6820278-34063-202265929960841/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:21:50 compute-0 sudo[30365]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:50 compute-0 sudo[30391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsaamzhclmtjjpvyduffknnkkcawvkla ; /usr/bin/python3'
Jan 23 09:21:50 compute-0 sudo[30391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:51 compute-0 python3[30393]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:21:51 compute-0 sudo[30391]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:51 compute-0 sudo[30464]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drxhtcllxfgzserlejkyeladtudxonjy ; /usr/bin/python3'
Jan 23 09:21:51 compute-0 sudo[30464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:51 compute-0 python3[30466]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769160108.6820278-34063-202265929960841/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:21:51 compute-0 sudo[30464]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:51 compute-0 sudo[30490]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffnskbujllhomtgkiifockccwkmutsmq ; /usr/bin/python3'
Jan 23 09:21:51 compute-0 sudo[30490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:51 compute-0 python3[30492]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:21:51 compute-0 sudo[30490]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:52 compute-0 sudo[30563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drkpkqkqkdglukfhazfoyxnfxqgjpzhy ; /usr/bin/python3'
Jan 23 09:21:52 compute-0 sudo[30563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:52 compute-0 python3[30565]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769160108.6820278-34063-202265929960841/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:21:52 compute-0 sudo[30563]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:52 compute-0 sudo[30589]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujascaijavzqwafxozmgsrztuqvrrnbb ; /usr/bin/python3'
Jan 23 09:21:52 compute-0 sudo[30589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:52 compute-0 python3[30591]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:21:52 compute-0 sudo[30589]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:52 compute-0 sudo[30662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqdapvfmqytxzsrrlerzelizaobglmpf ; /usr/bin/python3'
Jan 23 09:21:52 compute-0 sudo[30662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:52 compute-0 python3[30664]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769160108.6820278-34063-202265929960841/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:21:52 compute-0 sudo[30662]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:52 compute-0 sudo[30688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnbtmjuvxtknzhmqkmhkebqrnvvtnlxk ; /usr/bin/python3'
Jan 23 09:21:52 compute-0 sudo[30688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:53 compute-0 python3[30690]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:21:53 compute-0 sudo[30688]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:53 compute-0 sudo[30761]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zczpaqtnwqwkeedjwnayqpfzyogvymgh ; /usr/bin/python3'
Jan 23 09:21:53 compute-0 sudo[30761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:21:53 compute-0 python3[30763]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769160108.6820278-34063-202265929960841/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:21:53 compute-0 sudo[30761]: pam_unix(sudo:session): session closed for user root
Jan 23 09:21:55 compute-0 sshd-session[30788]: Connection closed by 192.168.122.11 port 36856 [preauth]
Jan 23 09:21:55 compute-0 sshd-session[30791]: Connection closed by 192.168.122.11 port 36868 [preauth]
Jan 23 09:21:55 compute-0 sshd-session[30790]: Unable to negotiate with 192.168.122.11 port 36882: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 23 09:21:55 compute-0 sshd-session[30789]: Unable to negotiate with 192.168.122.11 port 36892: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 23 09:21:55 compute-0 sshd-session[30792]: Unable to negotiate with 192.168.122.11 port 36900: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 23 09:22:05 compute-0 python3[30821]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:27:04 compute-0 sshd-session[29907]: Received disconnect from 38.129.56.17 port 54236:11: disconnected by user
Jan 23 09:27:04 compute-0 sshd-session[29907]: Disconnected from user zuul 38.129.56.17 port 54236
Jan 23 09:27:04 compute-0 sshd-session[29904]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:27:04 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 09:27:04 compute-0 systemd[1]: session-7.scope: Consumed 5.364s CPU time.
Jan 23 09:27:04 compute-0 systemd-logind[784]: Session 7 logged out. Waiting for processes to exit.
Jan 23 09:27:04 compute-0 systemd-logind[784]: Removed session 7.
Jan 23 09:37:14 compute-0 sshd-session[30831]: Accepted publickey for zuul from 192.168.122.30 port 53668 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:37:14 compute-0 systemd-logind[784]: New session 8 of user zuul.
Jan 23 09:37:14 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 23 09:37:14 compute-0 sshd-session[30831]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:37:15 compute-0 python3.9[30984]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:37:16 compute-0 sudo[31163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsrcgjowwfbkgvyjwdguqinmmudmikzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161035.7066739-51-190704415562675/AnsiballZ_command.py'
Jan 23 09:37:16 compute-0 sudo[31163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:37:16 compute-0 python3.9[31165]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:37:28 compute-0 sudo[31163]: pam_unix(sudo:session): session closed for user root
Jan 23 09:37:29 compute-0 sshd-session[30834]: Connection closed by 192.168.122.30 port 53668
Jan 23 09:37:29 compute-0 sshd-session[30831]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:37:29 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 09:37:29 compute-0 systemd[1]: session-8.scope: Consumed 9.933s CPU time.
Jan 23 09:37:29 compute-0 systemd-logind[784]: Session 8 logged out. Waiting for processes to exit.
Jan 23 09:37:29 compute-0 systemd-logind[784]: Removed session 8.
Jan 23 09:37:46 compute-0 sshd-session[31223]: Accepted publickey for zuul from 192.168.122.30 port 46058 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:37:46 compute-0 systemd-logind[784]: New session 9 of user zuul.
Jan 23 09:37:46 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 23 09:37:46 compute-0 sshd-session[31223]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:37:47 compute-0 python3.9[31376]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 23 09:37:48 compute-0 python3.9[31550]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:37:49 compute-0 sudo[31700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lohozhmgnbkwaatunxrmbtcphtemzhpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161069.3211265-88-62682483958305/AnsiballZ_command.py'
Jan 23 09:37:49 compute-0 sudo[31700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:37:49 compute-0 python3.9[31702]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:37:49 compute-0 sudo[31700]: pam_unix(sudo:session): session closed for user root
Jan 23 09:37:51 compute-0 sudo[31853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xichlygmqwwcxlmkiohwneexwplpgsme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161070.4025257-124-187628577220957/AnsiballZ_stat.py'
Jan 23 09:37:51 compute-0 sudo[31853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:37:51 compute-0 python3.9[31855]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:37:51 compute-0 sudo[31853]: pam_unix(sudo:session): session closed for user root
Jan 23 09:37:51 compute-0 sudo[32005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cynbqwjsdeqzhegjieskxnqrkdagyxsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161071.5340943-148-216104611170984/AnsiballZ_file.py'
Jan 23 09:37:51 compute-0 sudo[32005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:37:52 compute-0 python3.9[32007]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:37:52 compute-0 sudo[32005]: pam_unix(sudo:session): session closed for user root
Jan 23 09:37:52 compute-0 sudo[32157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvpqltmcrimkdvjhbeiyuoaoqgarqaeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161072.408834-172-31134254000743/AnsiballZ_stat.py'
Jan 23 09:37:52 compute-0 sudo[32157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:37:52 compute-0 python3.9[32159]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:37:52 compute-0 sudo[32157]: pam_unix(sudo:session): session closed for user root
Jan 23 09:37:53 compute-0 sudo[32280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuznzwiwaexmemlzswuqyyfjauhyjolo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161072.408834-172-31134254000743/AnsiballZ_copy.py'
Jan 23 09:37:53 compute-0 sudo[32280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:37:53 compute-0 python3.9[32282]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769161072.408834-172-31134254000743/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:37:53 compute-0 sudo[32280]: pam_unix(sudo:session): session closed for user root
Jan 23 09:37:54 compute-0 sudo[32432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgizjnndqiijwdclbrbutxzldiuqaoox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161074.1190178-217-209447494123185/AnsiballZ_setup.py'
Jan 23 09:37:54 compute-0 sudo[32432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:37:54 compute-0 python3.9[32434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:37:54 compute-0 sudo[32432]: pam_unix(sudo:session): session closed for user root
Jan 23 09:37:55 compute-0 sudo[32588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpunioxialyuecwxqtvmyuafsyyfjkfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161075.1367488-241-27462203200962/AnsiballZ_file.py'
Jan 23 09:37:55 compute-0 sudo[32588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:37:55 compute-0 python3.9[32590]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:37:55 compute-0 sudo[32588]: pam_unix(sudo:session): session closed for user root
Jan 23 09:37:56 compute-0 sudo[32740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcinxuwlnrhkrfqmpvooymnbmorvquxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161075.883125-268-47817079984026/AnsiballZ_file.py'
Jan 23 09:37:56 compute-0 sudo[32740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:37:56 compute-0 python3.9[32742]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:37:56 compute-0 sudo[32740]: pam_unix(sudo:session): session closed for user root
Jan 23 09:37:57 compute-0 python3.9[32892]: ansible-ansible.builtin.service_facts Invoked
Jan 23 09:38:00 compute-0 python3.9[33145]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:38:01 compute-0 python3.9[33295]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:38:03 compute-0 python3.9[33449]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:38:04 compute-0 sudo[33605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jydndqexdurggxsjafbgagpndxujkfbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161083.707526-412-153189304215595/AnsiballZ_setup.py'
Jan 23 09:38:04 compute-0 sudo[33605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:38:04 compute-0 python3.9[33607]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:38:04 compute-0 sudo[33605]: pam_unix(sudo:session): session closed for user root
Jan 23 09:38:05 compute-0 sudo[33689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhhzcdhojefqibmlmzxvrvmpnxjurssa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161083.707526-412-153189304215595/AnsiballZ_dnf.py'
Jan 23 09:38:05 compute-0 sudo[33689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:38:05 compute-0 python3.9[33691]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:39:03 compute-0 systemd[1]: Reloading.
Jan 23 09:39:04 compute-0 systemd-rc-local-generator[33889]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:39:04 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 23 09:39:04 compute-0 systemd[1]: Reloading.
Jan 23 09:39:04 compute-0 systemd-rc-local-generator[33931]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:39:04 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 23 09:39:04 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 23 09:39:04 compute-0 systemd[1]: Reloading.
Jan 23 09:39:04 compute-0 systemd-rc-local-generator[33971]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:39:05 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 23 09:39:05 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 23 09:39:05 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 23 09:39:05 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 23 09:40:28 compute-0 kernel: SELinux:  Converting 2722 SID table entries...
Jan 23 09:40:28 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 09:40:28 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 09:40:28 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 09:40:28 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 09:40:28 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 09:40:28 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 09:40:28 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 09:40:29 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 23 09:40:29 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 09:40:29 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 09:40:29 compute-0 systemd[1]: Reloading.
Jan 23 09:40:29 compute-0 systemd-rc-local-generator[34335]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:40:29 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 09:40:31 compute-0 sudo[33689]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:31 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 09:40:31 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 09:40:31 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.489s CPU time.
Jan 23 09:40:31 compute-0 systemd[1]: run-rb8b5bb0994d644d0a60f923fc4e1be92.service: Deactivated successfully.
Jan 23 09:40:31 compute-0 sudo[35244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etyahknustadltubvgugkjgtvzmggfxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161231.5907385-448-218247568128052/AnsiballZ_command.py'
Jan 23 09:40:31 compute-0 sudo[35244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:32 compute-0 python3.9[35246]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:40:33 compute-0 sudo[35244]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:34 compute-0 sudo[35525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anxzoebppypuufikgsoryindsshnrbzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161233.452372-472-202392760360499/AnsiballZ_selinux.py'
Jan 23 09:40:34 compute-0 sudo[35525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:34 compute-0 python3.9[35527]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 23 09:40:34 compute-0 sudo[35525]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:35 compute-0 sudo[35677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltcxqasjjxfytogqkewnqepzjkmnvddb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161235.29799-505-179011267995758/AnsiballZ_command.py'
Jan 23 09:40:35 compute-0 sudo[35677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:35 compute-0 python3.9[35679]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 23 09:40:39 compute-0 sudo[35677]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:39 compute-0 sudo[35830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iknjvtkyqdgjiajbbawiqzgsfbjstqjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161239.472287-529-231135066134803/AnsiballZ_file.py'
Jan 23 09:40:39 compute-0 sudo[35830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:40 compute-0 python3.9[35832]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:40:40 compute-0 sudo[35830]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:41 compute-0 sudo[35982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joeqkfkdqhgyughaoaiyyhkusselfzkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161240.9630735-553-158318871597716/AnsiballZ_mount.py'
Jan 23 09:40:41 compute-0 sudo[35982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:41 compute-0 python3.9[35984]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 23 09:40:41 compute-0 sudo[35982]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:45 compute-0 sudo[36134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wglinuevhybmhbuztpsbnjpiyxadoyvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161245.4378524-637-76014476801660/AnsiballZ_file.py'
Jan 23 09:40:45 compute-0 sudo[36134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:45 compute-0 python3.9[36136]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:40:45 compute-0 sudo[36134]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:49 compute-0 sudo[36286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnebfjezjeprvrxlsapniizuvuygdfpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161248.7694678-661-208009393118055/AnsiballZ_stat.py'
Jan 23 09:40:49 compute-0 sudo[36286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:49 compute-0 python3.9[36288]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:40:49 compute-0 sudo[36286]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:49 compute-0 sudo[36409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbvhiqlqqmdvzktifziaitnnbgirsfmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161248.7694678-661-208009393118055/AnsiballZ_copy.py'
Jan 23 09:40:49 compute-0 sudo[36409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:54 compute-0 python3.9[36411]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161248.7694678-661-208009393118055/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=022ad0c65ad9b9ad4d20c21b3609f531109c55bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:40:55 compute-0 sudo[36409]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:56 compute-0 sudo[36561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yznzdtoxvkmmsmshmsmkovcfvongrixb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161255.7841113-733-94711754332713/AnsiballZ_stat.py'
Jan 23 09:40:56 compute-0 sudo[36561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:56 compute-0 python3.9[36563]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:40:56 compute-0 sudo[36561]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:57 compute-0 sudo[36713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjrsjerpufokwnhfxfrfdamobgcpvoaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161256.469505-757-227837914897183/AnsiballZ_command.py'
Jan 23 09:40:57 compute-0 sudo[36713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:57 compute-0 python3.9[36715]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:40:57 compute-0 sudo[36713]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:57 compute-0 sudo[36866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhcovxbsyxvnltmqpjyywckizognoeit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161257.4813182-781-274178119227154/AnsiballZ_file.py'
Jan 23 09:40:57 compute-0 sudo[36866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:57 compute-0 python3.9[36868]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:40:57 compute-0 sudo[36866]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:58 compute-0 sudo[37018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxxghozhtfohewlhvxgbvnhsmdlzmykz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161258.3972754-814-66808513180238/AnsiballZ_getent.py'
Jan 23 09:40:58 compute-0 sudo[37018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:40:59 compute-0 python3.9[37020]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 23 09:40:59 compute-0 sudo[37018]: pam_unix(sudo:session): session closed for user root
Jan 23 09:40:59 compute-0 sudo[37171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcjdjrlyizzafwjocthlnrahrdankgkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161259.3389742-838-240840615505515/AnsiballZ_group.py'
Jan 23 09:40:59 compute-0 sudo[37171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:00 compute-0 python3.9[37173]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 09:41:00 compute-0 groupadd[37174]: group added to /etc/group: name=qemu, GID=107
Jan 23 09:41:00 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 09:41:00 compute-0 groupadd[37174]: group added to /etc/gshadow: name=qemu
Jan 23 09:41:00 compute-0 groupadd[37174]: new group: name=qemu, GID=107
Jan 23 09:41:00 compute-0 sudo[37171]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:01 compute-0 sudo[37330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngxpjeqqtzqiwjhnrerghxtizwfsaeqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161260.625534-862-120481756951488/AnsiballZ_user.py'
Jan 23 09:41:01 compute-0 sudo[37330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:01 compute-0 python3.9[37332]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 09:41:01 compute-0 useradd[37334]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 23 09:41:01 compute-0 sudo[37330]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:02 compute-0 sudo[37490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwxeypwyeigzwbwhgcvmeeuanmufelnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161262.1665905-886-262967969655334/AnsiballZ_getent.py'
Jan 23 09:41:02 compute-0 sudo[37490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:02 compute-0 python3.9[37492]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 23 09:41:02 compute-0 sudo[37490]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:03 compute-0 sudo[37643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fawkqjtothmyqgyczimsrrnnjaingpim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161262.9479337-910-67987472242371/AnsiballZ_group.py'
Jan 23 09:41:03 compute-0 sudo[37643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:03 compute-0 python3.9[37645]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 09:41:03 compute-0 groupadd[37646]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 23 09:41:03 compute-0 groupadd[37646]: group added to /etc/gshadow: name=hugetlbfs
Jan 23 09:41:03 compute-0 groupadd[37646]: new group: name=hugetlbfs, GID=42477
Jan 23 09:41:03 compute-0 sudo[37643]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:04 compute-0 sudo[37801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeupwfmactevkmpswaefxugyqacbhjcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161264.0358238-937-45933153009207/AnsiballZ_file.py'
Jan 23 09:41:04 compute-0 sudo[37801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:04 compute-0 python3.9[37803]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 23 09:41:04 compute-0 sudo[37801]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:05 compute-0 sudo[37953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obqfwyznkrzprowqsozrsxzcbybntdoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161265.1753037-970-206813345333635/AnsiballZ_dnf.py'
Jan 23 09:41:05 compute-0 sudo[37953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:05 compute-0 python3.9[37955]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:41:10 compute-0 sudo[37953]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:11 compute-0 sudo[38107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgfltxxrionhlgtthcrvcyqonqvvpnwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161271.0333276-994-40557000660574/AnsiballZ_file.py'
Jan 23 09:41:11 compute-0 sudo[38107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:11 compute-0 python3.9[38109]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:41:11 compute-0 sudo[38107]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:12 compute-0 sudo[38259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzeuuvuvjtzlpudkpcznpkewqpsqyttx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161271.7179892-1018-47685535848882/AnsiballZ_stat.py'
Jan 23 09:41:12 compute-0 sudo[38259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:12 compute-0 python3.9[38261]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:41:12 compute-0 sudo[38259]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:12 compute-0 sudo[38382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppdubqaefpouyiianfejxmbbbveptdbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161271.7179892-1018-47685535848882/AnsiballZ_copy.py'
Jan 23 09:41:12 compute-0 sudo[38382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:12 compute-0 python3.9[38384]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769161271.7179892-1018-47685535848882/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:41:12 compute-0 sudo[38382]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:13 compute-0 sudo[38534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yafwnbrnkzsjkqxwndqgykoythvdksoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161272.9927676-1063-152010350371819/AnsiballZ_systemd.py'
Jan 23 09:41:13 compute-0 sudo[38534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:13 compute-0 python3.9[38536]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 09:41:13 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 23 09:41:13 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 09:41:13 compute-0 kernel: Bridge firewalling registered
Jan 23 09:41:13 compute-0 systemd-modules-load[38540]: Inserted module 'br_netfilter'
Jan 23 09:41:13 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 23 09:41:14 compute-0 sudo[38534]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:14 compute-0 sudo[38694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmajrhvygbbxbfhjgqsdtrxslwhirjcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161274.184879-1087-99410988686522/AnsiballZ_stat.py'
Jan 23 09:41:14 compute-0 sudo[38694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:14 compute-0 python3.9[38696]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:41:14 compute-0 sudo[38694]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:15 compute-0 sudo[38817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtcfflkfgothxosuvfkhlyefpuqzarcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161274.184879-1087-99410988686522/AnsiballZ_copy.py'
Jan 23 09:41:15 compute-0 sudo[38817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:15 compute-0 python3.9[38819]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769161274.184879-1087-99410988686522/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:41:15 compute-0 sudo[38817]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:16 compute-0 sudo[38969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hinwuehyyclwmmsjktsolrucjmtzaruz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161275.7458813-1141-25975264208475/AnsiballZ_dnf.py'
Jan 23 09:41:16 compute-0 sudo[38969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:16 compute-0 python3.9[38971]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:41:20 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 23 09:41:20 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 23 09:41:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 09:41:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 09:41:20 compute-0 systemd[1]: Reloading.
Jan 23 09:41:20 compute-0 systemd-rc-local-generator[39033]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:41:21 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 09:41:22 compute-0 sudo[38969]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:23 compute-0 python3.9[40372]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:41:23 compute-0 python3.9[41286]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 23 09:41:24 compute-0 python3.9[42016]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:41:25 compute-0 sudo[42889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-requagkoolktdzykywunseskifdelmns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161285.0051277-1258-31693677849604/AnsiballZ_command.py'
Jan 23 09:41:25 compute-0 sudo[42889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:25 compute-0 python3.9[42911]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:41:25 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 23 09:41:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 09:41:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 09:41:25 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.505s CPU time.
Jan 23 09:41:25 compute-0 systemd[1]: run-r3f0c9b78588e487b98203f371e16b62c.service: Deactivated successfully.
Jan 23 09:41:26 compute-0 systemd[1]: Starting Authorization Manager...
Jan 23 09:41:26 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 23 09:41:26 compute-0 polkitd[43358]: Started polkitd version 0.117
Jan 23 09:41:26 compute-0 polkitd[43358]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 09:41:26 compute-0 polkitd[43358]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 09:41:26 compute-0 polkitd[43358]: Finished loading, compiling and executing 2 rules
Jan 23 09:41:26 compute-0 systemd[1]: Started Authorization Manager.
Jan 23 09:41:26 compute-0 polkitd[43358]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 23 09:41:26 compute-0 sudo[42889]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:27 compute-0 sudo[43526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lilpxpyiarrvacfxuvofcfbeposvngow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161287.244134-1285-22456036018705/AnsiballZ_systemd.py'
Jan 23 09:41:27 compute-0 sudo[43526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:27 compute-0 python3.9[43528]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:41:27 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 23 09:41:27 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 23 09:41:27 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 23 09:41:28 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 23 09:41:28 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 23 09:41:28 compute-0 sudo[43526]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:28 compute-0 python3.9[43689]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 23 09:41:32 compute-0 sudo[43839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihcmjeeoqsxxnirvcxwpahllffjmenmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161292.1390967-1456-151338760796537/AnsiballZ_systemd.py'
Jan 23 09:41:32 compute-0 sudo[43839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:32 compute-0 python3.9[43841]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:41:32 compute-0 systemd[1]: Reloading.
Jan 23 09:41:32 compute-0 systemd-rc-local-generator[43868]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:41:33 compute-0 sudo[43839]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:33 compute-0 sudo[44028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sykrszfpigrbtddtpfdahtjezqefcehw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161293.2164118-1456-280869229413328/AnsiballZ_systemd.py'
Jan 23 09:41:33 compute-0 sudo[44028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:33 compute-0 python3.9[44030]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:41:33 compute-0 systemd[1]: Reloading.
Jan 23 09:41:33 compute-0 systemd-rc-local-generator[44060]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:41:34 compute-0 sudo[44028]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:34 compute-0 sudo[44217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cchjjbjspqewhtktdbfsodzxfapnbmmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161294.4388494-1504-64002153275971/AnsiballZ_command.py'
Jan 23 09:41:34 compute-0 sudo[44217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:34 compute-0 python3.9[44219]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:41:34 compute-0 sudo[44217]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:35 compute-0 sudo[44370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxiaqwguorfnqrdvtbncfwpmfeqoiwfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161295.3718944-1528-265461258755543/AnsiballZ_command.py'
Jan 23 09:41:35 compute-0 sudo[44370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:35 compute-0 python3.9[44372]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:41:35 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 23 09:41:35 compute-0 sudo[44370]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:36 compute-0 sudo[44523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwmawvtbgjubytxlpouaanwarjeibabp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161296.1395655-1552-76242234863442/AnsiballZ_command.py'
Jan 23 09:41:36 compute-0 sudo[44523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:36 compute-0 python3.9[44525]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:41:38 compute-0 sudo[44523]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:38 compute-0 sudo[44685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnkrllkteuutqgxemxzbkdjrqybssbvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161298.35185-1576-201298010923510/AnsiballZ_command.py'
Jan 23 09:41:38 compute-0 sudo[44685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:38 compute-0 python3.9[44687]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:41:38 compute-0 sudo[44685]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:39 compute-0 sudo[44838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxklvpgwgyvpnwhpszqfvdfusvtymizm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161299.1432116-1600-201944427115305/AnsiballZ_systemd.py'
Jan 23 09:41:39 compute-0 sudo[44838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:39 compute-0 python3.9[44840]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 09:41:39 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 09:41:39 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 23 09:41:39 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 23 09:41:39 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 23 09:41:39 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 09:41:39 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 23 09:41:39 compute-0 sudo[44838]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:40 compute-0 sshd-session[31226]: Connection closed by 192.168.122.30 port 46058
Jan 23 09:41:40 compute-0 sshd-session[31223]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:41:40 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 09:41:40 compute-0 systemd[1]: session-9.scope: Consumed 2min 38.652s CPU time.
Jan 23 09:41:40 compute-0 systemd-logind[784]: Session 9 logged out. Waiting for processes to exit.
Jan 23 09:41:40 compute-0 systemd-logind[784]: Removed session 9.
Jan 23 09:41:47 compute-0 sshd-session[44871]: Accepted publickey for zuul from 192.168.122.30 port 59948 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:41:47 compute-0 systemd-logind[784]: New session 10 of user zuul.
Jan 23 09:41:47 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 23 09:41:47 compute-0 sshd-session[44871]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:41:48 compute-0 python3.9[45024]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:41:49 compute-0 sudo[45178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjwbiyydvjovxamrdktemhaweclhwats ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161309.188543-63-46805098347432/AnsiballZ_getent.py'
Jan 23 09:41:49 compute-0 sudo[45178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:50 compute-0 python3.9[45180]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 23 09:41:50 compute-0 sudo[45178]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:50 compute-0 sudo[45331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhtjpzimjprqncyfjnqyqarchquiloeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161310.447894-87-142720841356065/AnsiballZ_group.py'
Jan 23 09:41:50 compute-0 sudo[45331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:51 compute-0 python3.9[45333]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 09:41:51 compute-0 groupadd[45334]: group added to /etc/group: name=openvswitch, GID=42476
Jan 23 09:41:51 compute-0 groupadd[45334]: group added to /etc/gshadow: name=openvswitch
Jan 23 09:41:51 compute-0 groupadd[45334]: new group: name=openvswitch, GID=42476
Jan 23 09:41:51 compute-0 sudo[45331]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:52 compute-0 sudo[45489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgcwmnhgamytpknywgxbjbrxvakqutca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161311.878523-111-2431705611751/AnsiballZ_user.py'
Jan 23 09:41:52 compute-0 sudo[45489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:52 compute-0 python3.9[45491]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 09:41:53 compute-0 useradd[45493]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 23 09:41:53 compute-0 useradd[45493]: add 'openvswitch' to group 'hugetlbfs'
Jan 23 09:41:53 compute-0 useradd[45493]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 23 09:41:53 compute-0 sudo[45489]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:55 compute-0 sudo[45649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjlbspmujvtcnnqpztnfbvtieuhkimdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161315.4584453-141-25057598599647/AnsiballZ_setup.py'
Jan 23 09:41:55 compute-0 sudo[45649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:56 compute-0 python3.9[45651]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:41:56 compute-0 sudo[45649]: pam_unix(sudo:session): session closed for user root
Jan 23 09:41:56 compute-0 sudo[45733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzfxgfnfyhxinuuqrrgukpmwzbjnkfzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161315.4584453-141-25057598599647/AnsiballZ_dnf.py'
Jan 23 09:41:56 compute-0 sudo[45733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:41:56 compute-0 python3.9[45735]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 09:41:59 compute-0 sudo[45733]: pam_unix(sudo:session): session closed for user root
Jan 23 09:42:00 compute-0 sudo[45896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocgxuarcwaeedfhkjjlqhahrjfzdnnzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161320.1840882-183-171226484931704/AnsiballZ_dnf.py'
Jan 23 09:42:00 compute-0 sudo[45896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:42:00 compute-0 python3.9[45898]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:42:16 compute-0 kernel: SELinux:  Converting 2734 SID table entries...
Jan 23 09:42:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 09:42:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 09:42:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 09:42:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 09:42:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 09:42:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 09:42:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 09:42:16 compute-0 groupadd[45921]: group added to /etc/group: name=unbound, GID=994
Jan 23 09:42:16 compute-0 groupadd[45921]: group added to /etc/gshadow: name=unbound
Jan 23 09:42:16 compute-0 groupadd[45921]: new group: name=unbound, GID=994
Jan 23 09:42:16 compute-0 useradd[45928]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 23 09:42:17 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 23 09:42:17 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 23 09:42:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 09:42:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 09:42:20 compute-0 systemd[1]: Reloading.
Jan 23 09:42:20 compute-0 systemd-rc-local-generator[46424]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:42:20 compute-0 systemd-sysv-generator[46427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:42:20 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 09:42:23 compute-0 sudo[45896]: pam_unix(sudo:session): session closed for user root
Jan 23 09:42:24 compute-0 sudo[46993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxtphhgfwvmicyeuvczsmdmeufeiyhrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161343.9718585-207-24503156040238/AnsiballZ_systemd.py'
Jan 23 09:42:24 compute-0 sudo[46993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:42:24 compute-0 python3.9[46995]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 09:42:25 compute-0 systemd[1]: Reloading.
Jan 23 09:42:25 compute-0 systemd-rc-local-generator[47025]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:42:25 compute-0 systemd-sysv-generator[47028]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:42:25 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 23 09:42:25 compute-0 chown[47037]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 23 09:42:25 compute-0 ovs-ctl[47042]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 23 09:42:25 compute-0 ovs-ctl[47042]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 23 09:42:25 compute-0 ovs-ctl[47042]: Starting ovsdb-server [  OK  ]
Jan 23 09:42:25 compute-0 ovs-vsctl[47091]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 23 09:42:26 compute-0 ovs-vsctl[47110]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"57e418b8-f514-4483-8675-f32d2dcd8cea\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 23 09:42:26 compute-0 ovs-ctl[47042]: Configuring Open vSwitch system IDs [  OK  ]
Jan 23 09:42:26 compute-0 ovs-vsctl[47117]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 23 09:42:26 compute-0 ovs-ctl[47042]: Enabling remote OVSDB managers [  OK  ]
Jan 23 09:42:26 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 23 09:42:26 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 23 09:42:26 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 23 09:42:26 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 23 09:42:26 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 23 09:42:26 compute-0 ovs-ctl[47161]: Inserting openvswitch module [  OK  ]
Jan 23 09:42:26 compute-0 ovs-ctl[47130]: Starting ovs-vswitchd [  OK  ]
Jan 23 09:42:26 compute-0 ovs-vsctl[47178]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 23 09:42:26 compute-0 ovs-ctl[47130]: Enabling remote OVSDB managers [  OK  ]
Jan 23 09:42:26 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 23 09:42:26 compute-0 systemd[1]: Starting Open vSwitch...
Jan 23 09:42:26 compute-0 systemd[1]: Finished Open vSwitch.
Jan 23 09:42:26 compute-0 sudo[46993]: pam_unix(sudo:session): session closed for user root
Jan 23 09:42:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 09:42:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 09:42:27 compute-0 systemd[1]: run-re529f19afd2142408e04a0ac1a304d54.service: Deactivated successfully.
Jan 23 09:42:27 compute-0 python3.9[47331]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:42:28 compute-0 sudo[47481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixoongogpjlkvchsapyqixtjmmwtjmcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161347.822678-261-207376826053520/AnsiballZ_sefcontext.py'
Jan 23 09:42:28 compute-0 sudo[47481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:42:28 compute-0 python3.9[47483]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 23 09:42:30 compute-0 kernel: SELinux:  Converting 2748 SID table entries...
Jan 23 09:42:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 09:42:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 09:42:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 09:42:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 09:42:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 09:42:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 09:42:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 09:42:30 compute-0 sudo[47481]: pam_unix(sudo:session): session closed for user root
Jan 23 09:42:31 compute-0 python3.9[47638]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:42:32 compute-0 sudo[47794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adaopcccxeomlslnagxwwmqmtvqcjrut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161352.2744508-315-84303491858140/AnsiballZ_dnf.py'
Jan 23 09:42:32 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 23 09:42:32 compute-0 sudo[47794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:42:32 compute-0 python3.9[47796]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:42:34 compute-0 sudo[47794]: pam_unix(sudo:session): session closed for user root
Jan 23 09:42:35 compute-0 sudo[47947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reyovyetmzbclostrmaqfsvvyvxdjzzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161354.8448992-339-6911451103575/AnsiballZ_command.py'
Jan 23 09:42:35 compute-0 sudo[47947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:42:35 compute-0 python3.9[47949]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:42:36 compute-0 sudo[47947]: pam_unix(sudo:session): session closed for user root
Jan 23 09:42:37 compute-0 sudo[48234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyuuvefzbhkjimlzdlhyoagdhnjijuzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161356.6521873-363-274218091542007/AnsiballZ_file.py'
Jan 23 09:42:37 compute-0 sudo[48234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:42:37 compute-0 python3.9[48236]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 23 09:42:37 compute-0 sudo[48234]: pam_unix(sudo:session): session closed for user root
Jan 23 09:42:38 compute-0 python3.9[48386]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:42:38 compute-0 sudo[48538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rouaggfjpgasvplfppqwbschwqxonaew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161358.4413266-411-264165235856412/AnsiballZ_dnf.py'
Jan 23 09:42:38 compute-0 sudo[48538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:42:38 compute-0 python3.9[48540]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:42:42 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 09:42:42 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 09:42:42 compute-0 systemd[1]: Reloading.
Jan 23 09:42:42 compute-0 systemd-rc-local-generator[48576]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:42:42 compute-0 systemd-sysv-generator[48582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:42:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 09:42:44 compute-0 sudo[48538]: pam_unix(sudo:session): session closed for user root
Jan 23 09:42:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 09:42:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 09:42:44 compute-0 systemd[1]: run-rd275d65e26be459ead48c4eac8fe4311.service: Deactivated successfully.
Jan 23 09:42:44 compute-0 sudo[48855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpqleeziepgibmavsztltkwcncrmdryn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161364.3830578-435-82184007779529/AnsiballZ_systemd.py'
Jan 23 09:42:44 compute-0 sudo[48855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:42:45 compute-0 python3.9[48857]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 09:42:45 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 23 09:42:45 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 23 09:42:45 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 23 09:42:45 compute-0 systemd[1]: Stopping Network Manager...
Jan 23 09:42:45 compute-0 NetworkManager[7189]: <info>  [1769161365.1779] caught SIGTERM, shutting down normally.
Jan 23 09:42:45 compute-0 NetworkManager[7189]: <info>  [1769161365.1794] dhcp4 (eth0): canceled DHCP transaction
Jan 23 09:42:45 compute-0 NetworkManager[7189]: <info>  [1769161365.1794] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 09:42:45 compute-0 NetworkManager[7189]: <info>  [1769161365.1795] dhcp4 (eth0): state changed no lease
Jan 23 09:42:45 compute-0 NetworkManager[7189]: <info>  [1769161365.1797] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 09:42:45 compute-0 NetworkManager[7189]: <info>  [1769161365.1864] exiting (success)
Jan 23 09:42:45 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 09:42:45 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 23 09:42:45 compute-0 systemd[1]: Stopped Network Manager.
Jan 23 09:42:45 compute-0 systemd[1]: NetworkManager.service: Consumed 17.869s CPU time, 4.1M memory peak, read 0B from disk, written 41.0K to disk.
Jan 23 09:42:45 compute-0 systemd[1]: Starting Network Manager...
Jan 23 09:42:45 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.2483] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:32ce9a2a-527f-4400-a04f-d4b7f74a7a70)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.2484] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.2554] manager[0x556437474000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 09:42:45 compute-0 systemd[1]: Starting Hostname Service...
Jan 23 09:42:45 compute-0 systemd[1]: Started Hostname Service.
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3414] hostname: hostname: using hostnamed
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3416] hostname: static hostname changed from (none) to "compute-0"
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3423] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3428] manager[0x556437474000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3428] manager[0x556437474000]: rfkill: WWAN hardware radio set enabled
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3453] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3464] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3464] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3465] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3465] manager: Networking is enabled by state file
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3468] settings: Loaded settings plugin: keyfile (internal)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3472] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3498] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3509] dhcp: init: Using DHCP client 'internal'
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3512] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3518] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3524] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3531] device (lo): Activation: starting connection 'lo' (0e3dd286-7fba-41e7-8d0b-2929e29deeb1)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3539] device (eth0): carrier: link connected
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3544] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3554] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3555] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3560] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3566] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3571] device (eth1): carrier: link connected
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3576] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3582] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (568f73b4-88ba-5ba3-8eca-ff7d1807a044) (indicated)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3582] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3587] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3594] device (eth1): Activation: starting connection 'ci-private-network' (568f73b4-88ba-5ba3-8eca-ff7d1807a044)
Jan 23 09:42:45 compute-0 systemd[1]: Started Network Manager.
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3610] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3621] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.3624] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4549] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4552] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4555] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4557] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4560] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4564] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4570] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4573] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4583] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4603] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4611] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4613] dhcp4 (eth0): state changed new lease, address=38.129.56.206
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4616] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4620] device (lo): Activation: successful, device activated.
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4631] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 09:42:45 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4743] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4750] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4754] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4757] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4758] device (eth1): Activation: successful, device activated.
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4773] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4775] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4780] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4782] device (eth0): Activation: successful, device activated.
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4788] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 09:42:45 compute-0 NetworkManager[48866]: <info>  [1769161365.4790] manager: startup complete
Jan 23 09:42:45 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 23 09:42:45 compute-0 sudo[48855]: pam_unix(sudo:session): session closed for user root
Jan 23 09:42:45 compute-0 sudo[49081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtjionhercpdjyusbvzgxnxscxpaxilr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161365.6850533-459-262801965084988/AnsiballZ_dnf.py'
Jan 23 09:42:45 compute-0 sudo[49081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:42:46 compute-0 python3.9[49083]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:42:54 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 09:42:54 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 09:42:54 compute-0 systemd[1]: Reloading.
Jan 23 09:42:54 compute-0 systemd-rc-local-generator[49134]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:42:54 compute-0 systemd-sysv-generator[49140]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:42:54 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 09:42:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 09:42:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 09:42:55 compute-0 systemd[1]: run-r32ba2de8d3064580ae614d75a1031ba4.service: Deactivated successfully.
Jan 23 09:42:55 compute-0 sudo[49081]: pam_unix(sudo:session): session closed for user root
Jan 23 09:42:55 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 09:43:00 compute-0 sudo[49542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaqgiapsxlncxirspiegnjtilyqlobfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161379.8513753-495-204396807082122/AnsiballZ_stat.py'
Jan 23 09:43:00 compute-0 sudo[49542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:00 compute-0 python3.9[49544]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:43:00 compute-0 sudo[49542]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:00 compute-0 sudo[49694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvfdggqzwiblgabfwacxytgyscgjgjqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161380.565775-522-74118192559547/AnsiballZ_ini_file.py'
Jan 23 09:43:00 compute-0 sudo[49694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:01 compute-0 python3.9[49696]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:01 compute-0 sudo[49694]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:01 compute-0 sudo[49848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofwebougqjaywtdkltmowmwtffqomhmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161381.5366-552-240872449716909/AnsiballZ_ini_file.py'
Jan 23 09:43:01 compute-0 sudo[49848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:01 compute-0 python3.9[49850]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:02 compute-0 sudo[49848]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:02 compute-0 sudo[50000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znvkzdyhnsxznhjmcdtmdgjhpgvygxlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161382.1348035-552-163577298154772/AnsiballZ_ini_file.py'
Jan 23 09:43:02 compute-0 sudo[50000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:02 compute-0 python3.9[50002]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:02 compute-0 sudo[50000]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:03 compute-0 sudo[50152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxnubhriqwnjxhqfnzuehmidujgcuvbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161382.916411-597-33727445023091/AnsiballZ_ini_file.py'
Jan 23 09:43:03 compute-0 sudo[50152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:03 compute-0 python3.9[50154]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:03 compute-0 sudo[50152]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:03 compute-0 sudo[50304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpiradkrewsvbomfdipxuudpbxfyvplv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161383.536406-597-106513966435955/AnsiballZ_ini_file.py'
Jan 23 09:43:03 compute-0 sudo[50304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:04 compute-0 python3.9[50306]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:04 compute-0 sudo[50304]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:04 compute-0 sudo[50456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhjjlxizsbabeygyhuwwfdlmmvqnzvla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161384.2208312-642-122817305085485/AnsiballZ_stat.py'
Jan 23 09:43:04 compute-0 sudo[50456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:04 compute-0 python3.9[50458]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:43:04 compute-0 sudo[50456]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:05 compute-0 sudo[50579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdcpkgpxtazyqhaqdlunnqtmhhejizke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161384.2208312-642-122817305085485/AnsiballZ_copy.py'
Jan 23 09:43:05 compute-0 sudo[50579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:05 compute-0 python3.9[50581]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769161384.2208312-642-122817305085485/.source _original_basename=.3kmne5r1 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:05 compute-0 sudo[50579]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:05 compute-0 sudo[50731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlvuhipysovswvjasdrclkexyzbsogtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161385.6888473-687-5610127820653/AnsiballZ_file.py'
Jan 23 09:43:05 compute-0 sudo[50731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:06 compute-0 python3.9[50733]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:06 compute-0 sudo[50731]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:06 compute-0 sudo[50883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrhdmnilcquprxpknuqxbrsiidggjdeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161386.361109-711-146433421334451/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 23 09:43:06 compute-0 sudo[50883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:07 compute-0 python3.9[50885]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 23 09:43:07 compute-0 sudo[50883]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:07 compute-0 sudo[51035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntsafnjzvnoblwdocustapzpmlltinlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161387.2991629-738-216491585461248/AnsiballZ_file.py'
Jan 23 09:43:07 compute-0 sudo[51035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:07 compute-0 python3.9[51037]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:07 compute-0 sudo[51035]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:08 compute-0 sudo[51187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isftabndovkxpmlgviohpdkepyagnvwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161388.2180438-768-129390884537593/AnsiballZ_stat.py'
Jan 23 09:43:08 compute-0 sudo[51187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:08 compute-0 sudo[51187]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:09 compute-0 sudo[51310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyyokkysagggugfzyzgjcsxinjxgyfgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161388.2180438-768-129390884537593/AnsiballZ_copy.py'
Jan 23 09:43:09 compute-0 sudo[51310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:09 compute-0 sudo[51310]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:10 compute-0 sudo[51462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfehqopqqeajenuyvzuypupblmjyrudr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161389.7398472-813-179487015142351/AnsiballZ_slurp.py'
Jan 23 09:43:10 compute-0 sudo[51462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:10 compute-0 python3.9[51464]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 23 09:43:10 compute-0 sudo[51462]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:11 compute-0 sudo[51637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzpusahsijfjbwjgeseclzxyfbwrvsod ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161390.6520872-840-169058876903927/async_wrapper.py j294829377543 300 /home/zuul/.ansible/tmp/ansible-tmp-1769161390.6520872-840-169058876903927/AnsiballZ_edpm_os_net_config.py _'
Jan 23 09:43:11 compute-0 sudo[51637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:11 compute-0 ansible-async_wrapper.py[51639]: Invoked with j294829377543 300 /home/zuul/.ansible/tmp/ansible-tmp-1769161390.6520872-840-169058876903927/AnsiballZ_edpm_os_net_config.py _
Jan 23 09:43:11 compute-0 ansible-async_wrapper.py[51642]: Starting module and watcher
Jan 23 09:43:11 compute-0 ansible-async_wrapper.py[51642]: Start watching 51643 (300)
Jan 23 09:43:11 compute-0 ansible-async_wrapper.py[51643]: Start module (51643)
Jan 23 09:43:11 compute-0 ansible-async_wrapper.py[51639]: Return async_wrapper task started.
Jan 23 09:43:11 compute-0 sudo[51637]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:11 compute-0 python3.9[51644]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 23 09:43:12 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 23 09:43:12 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 23 09:43:12 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 23 09:43:12 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 23 09:43:12 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.8993] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51645 uid=0 result="success"
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9028] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51645 uid=0 result="success"
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9829] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9831] audit: op="connection-add" uuid="f53589c8-899b-4a5c-8dfc-65f6daa523e6" name="br-ex-br" pid=51645 uid=0 result="success"
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9855] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9866] audit: op="connection-add" uuid="c1f7ee53-024a-4898-bb05-29bff4f6719d" name="br-ex-port" pid=51645 uid=0 result="success"
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9889] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9902] audit: op="connection-add" uuid="2a703088-f0c0-47d0-b83a-3673b724f594" name="eth1-port" pid=51645 uid=0 result="success"
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9921] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9923] audit: op="connection-add" uuid="29196391-40c6-43d3-9270-03ef07837a42" name="vlan20-port" pid=51645 uid=0 result="success"
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9941] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9943] audit: op="connection-add" uuid="dfee9cdd-ae19-4b78-877d-715b2d17814c" name="vlan21-port" pid=51645 uid=0 result="success"
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9965] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9966] audit: op="connection-add" uuid="2b4ece5b-6aa4-451d-a907-b1dbc2c25c4a" name="vlan22-port" pid=51645 uid=0 result="success"
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9992] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 23 09:43:13 compute-0 NetworkManager[48866]: <info>  [1769161393.9993] audit: op="connection-add" uuid="f3c0a700-ffd4-4b2b-940f-28b118007327" name="vlan23-port" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0022] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0044] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0046] audit: op="connection-add" uuid="e6912985-6cc0-42f9-a29a-d134ee109775" name="br-ex-if" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0388] audit: op="connection-update" uuid="568f73b4-88ba-5ba3-8eca-ff7d1807a044" name="ci-private-network" args="ovs-interface.type,ipv4.dns,ipv4.routing-rules,ipv4.addresses,ipv4.never-default,ipv4.method,ipv4.routes,ovs-external-ids.data,connection.slave-type,connection.controller,connection.master,connection.port-type,connection.timestamp,ipv6.dns,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.addresses,ipv6.method,ipv6.routes" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0420] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0422] audit: op="connection-add" uuid="b9e6c7d6-7077-4b94-82c0-0899706ec720" name="vlan20-if" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0442] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0444] audit: op="connection-add" uuid="5c239b4f-74dc-4338-8203-82747e89a8b9" name="vlan21-if" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0468] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0470] audit: op="connection-add" uuid="1a2e1868-7839-4f68-8e2c-f0dee8f55feb" name="vlan22-if" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0498] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0500] audit: op="connection-add" uuid="ae6f34b0-38c2-415a-95c4-f41925a511b1" name="vlan23-if" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0519] audit: op="connection-delete" uuid="80f800fd-9bd3-3b41-8339-5a455d46d8c5" name="Wired connection 1" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0536] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0539] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0548] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0552] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f53589c8-899b-4a5c-8dfc-65f6daa523e6)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0553] audit: op="connection-activate" uuid="f53589c8-899b-4a5c-8dfc-65f6daa523e6" name="br-ex-br" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0556] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0557] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0565] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0570] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (c1f7ee53-024a-4898-bb05-29bff4f6719d)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0572] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0574] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0579] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0583] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (2a703088-f0c0-47d0-b83a-3673b724f594)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0585] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0586] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0592] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0597] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (29196391-40c6-43d3-9270-03ef07837a42)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0599] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0600] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0607] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0612] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (dfee9cdd-ae19-4b78-877d-715b2d17814c)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0615] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0616] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0621] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0626] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (2b4ece5b-6aa4-451d-a907-b1dbc2c25c4a)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0629] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0630] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0635] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0640] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (f3c0a700-ffd4-4b2b-940f-28b118007327)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0642] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0645] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0646] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0653] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0654] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0657] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0661] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e6912985-6cc0-42f9-a29a-d134ee109775)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0662] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0666] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0667] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0669] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0670] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0682] device (eth1): disconnecting for new activation request.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0683] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0686] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0688] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0747] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0752] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0755] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0758] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0762] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (b9e6c7d6-7077-4b94-82c0-0899706ec720)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0763] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0766] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0768] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0769] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0772] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0773] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0775] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0778] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (5c239b4f-74dc-4338-8203-82747e89a8b9)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0779] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0781] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0783] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0784] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0786] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0787] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0789] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0792] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (1a2e1868-7839-4f68-8e2c-f0dee8f55feb)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0793] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0795] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0796] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0797] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0798] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <warn>  [1769161394.0799] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0801] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0805] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (ae6f34b0-38c2-415a-95c4-f41925a511b1)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0806] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0808] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0810] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0811] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0812] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0824] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0825] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0828] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0829] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0836] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0840] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0843] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0846] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0847] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0851] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0854] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0856] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0858] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0862] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0865] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0868] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0869] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0887] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 systemd-udevd[51649]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 09:43:14 compute-0 kernel: Timeout policy base is empty
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0892] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0898] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0906] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0915] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0924] dhcp4 (eth0): canceled DHCP transaction
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0924] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0924] dhcp4 (eth0): state changed no lease
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0926] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0936] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0945] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51645 uid=0 result="fail" reason="Device is not activated"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0949] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0956] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0963] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0970] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.0972] dhcp4 (eth0): state changed new lease, address=38.129.56.206
Jan 23 09:43:14 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 09:43:14 compute-0 kernel: br-ex: entered promiscuous mode
Jan 23 09:43:14 compute-0 kernel: vlan22: entered promiscuous mode
Jan 23 09:43:14 compute-0 systemd-udevd[51650]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 09:43:14 compute-0 kernel: vlan20: entered promiscuous mode
Jan 23 09:43:14 compute-0 kernel: vlan23: entered promiscuous mode
Jan 23 09:43:14 compute-0 kernel: vlan21: entered promiscuous mode
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3639] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3649] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3676] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3684] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3690] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3695] device (eth1): disconnecting for new activation request.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3696] audit: op="connection-activate" uuid="568f73b4-88ba-5ba3-8eca-ff7d1807a044" name="ci-private-network" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3696] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3901] device (eth1): Activation: starting connection 'ci-private-network' (568f73b4-88ba-5ba3-8eca-ff7d1807a044)
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3906] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3907] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3908] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3909] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3910] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3911] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3912] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3935] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3937] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3943] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3947] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3951] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3953] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3955] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3957] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3960] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3962] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3965] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3967] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3970] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3972] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3975] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3977] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3980] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51645 uid=0 result="success"
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.3993] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4016] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4021] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4026] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4031] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4036] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4040] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4050] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4054] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4055] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4060] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4065] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4066] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4067] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4068] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4071] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4074] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4077] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4081] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4086] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4091] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4093] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4094] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4098] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4104] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.4117] device (eth1): Activation: successful, device activated.
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.8236] checkpoint[0x55643744a950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 23 09:43:14 compute-0 NetworkManager[48866]: <info>  [1769161394.8239] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51645 uid=0 result="success"
Jan 23 09:43:15 compute-0 sudo[52003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbmkcbfzhyewwqzhfypdmuksjvgejtxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161394.680989-840-1265456499088/AnsiballZ_async_status.py'
Jan 23 09:43:15 compute-0 sudo[52003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:15 compute-0 NetworkManager[48866]: <info>  [1769161395.1976] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51645 uid=0 result="success"
Jan 23 09:43:15 compute-0 NetworkManager[48866]: <info>  [1769161395.1989] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51645 uid=0 result="success"
Jan 23 09:43:15 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 09:43:15 compute-0 python3.9[52005]: ansible-ansible.legacy.async_status Invoked with jid=j294829377543.51639 mode=status _async_dir=/root/.ansible_async
Jan 23 09:43:15 compute-0 sudo[52003]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:15 compute-0 NetworkManager[48866]: <info>  [1769161395.4605] audit: op="networking-control" arg="global-dns-configuration" pid=51645 uid=0 result="success"
Jan 23 09:43:15 compute-0 NetworkManager[48866]: <info>  [1769161395.4657] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 23 09:43:15 compute-0 NetworkManager[48866]: <info>  [1769161395.4711] audit: op="networking-control" arg="global-dns-configuration" pid=51645 uid=0 result="success"
Jan 23 09:43:15 compute-0 NetworkManager[48866]: <info>  [1769161395.4742] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51645 uid=0 result="success"
Jan 23 09:43:15 compute-0 NetworkManager[48866]: <info>  [1769161395.6541] checkpoint[0x55643744aa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 23 09:43:15 compute-0 NetworkManager[48866]: <info>  [1769161395.6547] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51645 uid=0 result="success"
Jan 23 09:43:15 compute-0 ansible-async_wrapper.py[51643]: Module complete (51643)
Jan 23 09:43:16 compute-0 ansible-async_wrapper.py[51642]: Done in kid B.
Jan 23 09:43:18 compute-0 sudo[52109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtjfebhiwidslvpevtaahosmgmipwafs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161394.680989-840-1265456499088/AnsiballZ_async_status.py'
Jan 23 09:43:18 compute-0 sudo[52109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:19 compute-0 python3.9[52111]: ansible-ansible.legacy.async_status Invoked with jid=j294829377543.51639 mode=status _async_dir=/root/.ansible_async
Jan 23 09:43:19 compute-0 sudo[52109]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:19 compute-0 sudo[52209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wahzpwxjdejubcpylpkclmbwkznvnucz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161394.680989-840-1265456499088/AnsiballZ_async_status.py'
Jan 23 09:43:19 compute-0 sudo[52209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:19 compute-0 python3.9[52211]: ansible-ansible.legacy.async_status Invoked with jid=j294829377543.51639 mode=cleanup _async_dir=/root/.ansible_async
Jan 23 09:43:19 compute-0 sudo[52209]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:20 compute-0 sudo[52361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnriksmzdhlfxqwjhwrjizohvkpdmeaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161399.759635-921-52329442956310/AnsiballZ_stat.py'
Jan 23 09:43:20 compute-0 sudo[52361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:20 compute-0 python3.9[52363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:43:20 compute-0 sudo[52361]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:20 compute-0 sudo[52484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fruhplicqrymmkwsohnvquuxeqpjtrkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161399.759635-921-52329442956310/AnsiballZ_copy.py'
Jan 23 09:43:20 compute-0 sudo[52484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:20 compute-0 python3.9[52486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769161399.759635-921-52329442956310/.source.returncode _original_basename=.t4lumh39 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:20 compute-0 sudo[52484]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:21 compute-0 sudo[52636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-proyjgnsqgtjooraiahqurhfhcgaonlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161401.1089542-969-51268292887929/AnsiballZ_stat.py'
Jan 23 09:43:21 compute-0 sudo[52636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:21 compute-0 python3.9[52638]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:43:21 compute-0 sudo[52636]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:22 compute-0 sudo[52759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfsuwoprmedcgabyhjvjpqyeulzgerft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161401.1089542-969-51268292887929/AnsiballZ_copy.py'
Jan 23 09:43:22 compute-0 sudo[52759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:22 compute-0 python3.9[52761]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769161401.1089542-969-51268292887929/.source.cfg _original_basename=.qjntam10 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:22 compute-0 sudo[52759]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:22 compute-0 sudo[52912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygsnrnwebyksbxlbvxthgzzefxbumvia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161402.4314125-1014-91915750955482/AnsiballZ_systemd.py'
Jan 23 09:43:22 compute-0 sudo[52912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:23 compute-0 python3.9[52914]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 09:43:23 compute-0 systemd[1]: Reloading Network Manager...
Jan 23 09:43:23 compute-0 NetworkManager[48866]: <info>  [1769161403.1263] audit: op="reload" arg="0" pid=52918 uid=0 result="success"
Jan 23 09:43:23 compute-0 NetworkManager[48866]: <info>  [1769161403.1270] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 23 09:43:23 compute-0 systemd[1]: Reloaded Network Manager.
Jan 23 09:43:23 compute-0 sudo[52912]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:23 compute-0 sshd-session[44874]: Connection closed by 192.168.122.30 port 59948
Jan 23 09:43:23 compute-0 sshd-session[44871]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:43:23 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 09:43:23 compute-0 systemd[1]: session-10.scope: Consumed 57.977s CPU time.
Jan 23 09:43:23 compute-0 systemd-logind[784]: Session 10 logged out. Waiting for processes to exit.
Jan 23 09:43:23 compute-0 systemd-logind[784]: Removed session 10.
Jan 23 09:43:29 compute-0 sshd-session[52950]: Accepted publickey for zuul from 192.168.122.30 port 50442 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:43:29 compute-0 systemd-logind[784]: New session 11 of user zuul.
Jan 23 09:43:29 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 23 09:43:29 compute-0 sshd-session[52950]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:43:30 compute-0 python3.9[53103]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:43:32 compute-0 python3.9[53257]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:43:33 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 09:43:33 compute-0 python3.9[53453]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:43:33 compute-0 sshd-session[52953]: Connection closed by 192.168.122.30 port 50442
Jan 23 09:43:33 compute-0 sshd-session[52950]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:43:33 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 09:43:33 compute-0 systemd[1]: session-11.scope: Consumed 2.409s CPU time.
Jan 23 09:43:33 compute-0 systemd-logind[784]: Session 11 logged out. Waiting for processes to exit.
Jan 23 09:43:33 compute-0 systemd-logind[784]: Removed session 11.
Jan 23 09:43:39 compute-0 sshd-session[53481]: Accepted publickey for zuul from 192.168.122.30 port 45732 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:43:39 compute-0 systemd-logind[784]: New session 12 of user zuul.
Jan 23 09:43:39 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 23 09:43:39 compute-0 sshd-session[53481]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:43:40 compute-0 python3.9[53634]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:43:41 compute-0 python3.9[53788]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:43:42 compute-0 sudo[53943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edzpsykdszpjfornhtqijyfdlvderhdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161421.9481359-75-240135753877854/AnsiballZ_setup.py'
Jan 23 09:43:42 compute-0 sudo[53943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:42 compute-0 python3.9[53945]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:43:42 compute-0 sudo[53943]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:43 compute-0 sudo[54027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evxnqubzhfvczbkxjxjzohcdbdhkkwmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161421.9481359-75-240135753877854/AnsiballZ_dnf.py'
Jan 23 09:43:43 compute-0 sudo[54027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:43 compute-0 python3.9[54029]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:43:45 compute-0 sudo[54027]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:45 compute-0 sudo[54181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psxgpnlynhgninwokuvqbauwaxgkcdzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161425.6547928-111-28999815110339/AnsiballZ_setup.py'
Jan 23 09:43:45 compute-0 sudo[54181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:46 compute-0 python3.9[54183]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:43:46 compute-0 sudo[54181]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:47 compute-0 sudo[54376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unxuwpokcdythcwthdlngjcgiwftqavo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161426.895612-144-103044756826773/AnsiballZ_file.py'
Jan 23 09:43:47 compute-0 sudo[54376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:47 compute-0 python3.9[54378]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:47 compute-0 sudo[54376]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:48 compute-0 sudo[54528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaoatddefthkwzangnckxfrwdgfpsfmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161427.672538-168-269500574238095/AnsiballZ_command.py'
Jan 23 09:43:48 compute-0 sudo[54528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:48 compute-0 python3.9[54530]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:43:48 compute-0 podman[54531]: 2026-01-23 09:43:48.465010579 +0000 UTC m=+0.068158584 system refresh
Jan 23 09:43:48 compute-0 sudo[54528]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:49 compute-0 sudo[54691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azztmmxvgfiojxskgaofnqvfcejhamko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161428.6541924-192-260358022804531/AnsiballZ_stat.py'
Jan 23 09:43:49 compute-0 sudo[54691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:43:49 compute-0 python3.9[54693]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:43:49 compute-0 sudo[54691]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:50 compute-0 sudo[54814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pivvthffcogjokaksupcrdhbqapjlejw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161428.6541924-192-260358022804531/AnsiballZ_copy.py'
Jan 23 09:43:50 compute-0 sudo[54814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:50 compute-0 python3.9[54816]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161428.6541924-192-260358022804531/.source.json follow=False _original_basename=podman_network_config.j2 checksum=b669d1b580391af766f951ed824d55740cbe1a6a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:43:50 compute-0 sudo[54814]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:50 compute-0 sudo[54966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejmwnwjlxhvwsbyiyklehjqjtzvdorqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161430.5220335-237-147089041336324/AnsiballZ_stat.py'
Jan 23 09:43:50 compute-0 sudo[54966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:50 compute-0 python3.9[54968]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:43:51 compute-0 sudo[54966]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:51 compute-0 sudo[55089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azugdtwqcnlwpwyqkdlldosvxyxvqlso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161430.5220335-237-147089041336324/AnsiballZ_copy.py'
Jan 23 09:43:51 compute-0 sudo[55089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:51 compute-0 python3.9[55091]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769161430.5220335-237-147089041336324/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:43:51 compute-0 sudo[55089]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:52 compute-0 sudo[55241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxognwgjoxycswghlngiimspbxjjtjxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161431.959632-285-254017454421480/AnsiballZ_ini_file.py'
Jan 23 09:43:52 compute-0 sudo[55241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:52 compute-0 python3.9[55243]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:43:52 compute-0 sudo[55241]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:53 compute-0 sudo[55393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajlyoxxbkvukjcftgphpnefiagsotcwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161432.783505-285-218960184250151/AnsiballZ_ini_file.py'
Jan 23 09:43:53 compute-0 sudo[55393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:53 compute-0 python3.9[55395]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:43:53 compute-0 sudo[55393]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:53 compute-0 sudo[55545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbqfeukdkrpphdljjjfhfelhirmyvwja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161433.4431295-285-189936796054755/AnsiballZ_ini_file.py'
Jan 23 09:43:53 compute-0 sudo[55545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:53 compute-0 python3.9[55547]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:43:53 compute-0 sudo[55545]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:54 compute-0 sudo[55697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chbqzvbibrytdexubpnlhyuhwoqcknyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161434.0618765-285-151398458577757/AnsiballZ_ini_file.py'
Jan 23 09:43:54 compute-0 sudo[55697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:54 compute-0 python3.9[55699]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:43:54 compute-0 sudo[55697]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:55 compute-0 sudo[55849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlvcgapyvwzcgglvxqnyohmcblfbgzfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161434.8621683-378-161571110911101/AnsiballZ_dnf.py'
Jan 23 09:43:55 compute-0 sudo[55849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:55 compute-0 python3.9[55851]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:43:56 compute-0 sudo[55849]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:57 compute-0 sudo[56002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjjzgwmaybvdpozcrgmnoupzrivepjsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161437.4079394-411-273863517162731/AnsiballZ_setup.py'
Jan 23 09:43:57 compute-0 sudo[56002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:58 compute-0 python3.9[56004]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:43:58 compute-0 sudo[56002]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:58 compute-0 sudo[56156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgfvhrlbzssifodbxzrfqatyckgczwyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161438.2414775-435-197769195119735/AnsiballZ_stat.py'
Jan 23 09:43:58 compute-0 sudo[56156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:58 compute-0 python3.9[56158]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:43:58 compute-0 sudo[56156]: pam_unix(sudo:session): session closed for user root
Jan 23 09:43:59 compute-0 sudo[56308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egklboklrixpwoabjtsqwxypwenkupyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161438.983938-462-265913082243750/AnsiballZ_stat.py'
Jan 23 09:43:59 compute-0 sudo[56308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:43:59 compute-0 python3.9[56310]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:43:59 compute-0 sudo[56308]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:00 compute-0 sudo[56460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rketwueshlojcazcgzigeieerxmjzhdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161439.7800598-492-120904173706980/AnsiballZ_command.py'
Jan 23 09:44:00 compute-0 sudo[56460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:00 compute-0 python3.9[56462]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:44:00 compute-0 sudo[56460]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:01 compute-0 sudo[56613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upcyatntveevvrukoiderpkxzkmwyfoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161440.5711153-522-19355992327722/AnsiballZ_service_facts.py'
Jan 23 09:44:01 compute-0 sudo[56613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:01 compute-0 python3.9[56615]: ansible-service_facts Invoked
Jan 23 09:44:01 compute-0 network[56632]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 09:44:01 compute-0 network[56633]: 'network-scripts' will be removed from distribution in near future.
Jan 23 09:44:01 compute-0 network[56634]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 09:44:04 compute-0 sudo[56613]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:05 compute-0 sudo[56917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdmsfjyiujyuigdfsynwhfohyepqqewb ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769161445.019493-567-89990143388118/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769161445.019493-567-89990143388118/args'
Jan 23 09:44:05 compute-0 sudo[56917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:05 compute-0 sudo[56917]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:05 compute-0 sudo[57084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fticlsutqaqpzqssfmujfboypfjdrlxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161445.6997235-600-61387086387776/AnsiballZ_dnf.py'
Jan 23 09:44:05 compute-0 sudo[57084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:06 compute-0 python3.9[57086]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:44:08 compute-0 sudo[57084]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:10 compute-0 sudo[57237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tghibwukdiuzpmvbgudukzxtpnnlzxsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161450.073174-639-213846467360304/AnsiballZ_package_facts.py'
Jan 23 09:44:10 compute-0 sudo[57237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:10 compute-0 python3.9[57239]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 23 09:44:11 compute-0 sudo[57237]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:12 compute-0 sudo[57389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysevoknqzbkggydjyxuwacgnpximfkyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161451.8717923-669-61548908782337/AnsiballZ_stat.py'
Jan 23 09:44:12 compute-0 sudo[57389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:12 compute-0 python3.9[57391]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:44:12 compute-0 sudo[57389]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:12 compute-0 sudo[57514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcfiprunvodimeustzehvvsqgvzuamyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161451.8717923-669-61548908782337/AnsiballZ_copy.py'
Jan 23 09:44:12 compute-0 sudo[57514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:12 compute-0 python3.9[57516]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769161451.8717923-669-61548908782337/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:12 compute-0 sudo[57514]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:13 compute-0 sudo[57668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beqqhvvxegmtidxkdtovjelzpuzmtojd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161453.2413764-714-115764130664694/AnsiballZ_stat.py'
Jan 23 09:44:13 compute-0 sudo[57668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:13 compute-0 python3.9[57670]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:44:13 compute-0 sudo[57668]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:14 compute-0 sudo[57793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvggsrdaqwryfxxukdlqnnnyasosvljv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161453.2413764-714-115764130664694/AnsiballZ_copy.py'
Jan 23 09:44:14 compute-0 sudo[57793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:14 compute-0 python3.9[57795]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769161453.2413764-714-115764130664694/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:14 compute-0 sudo[57793]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:15 compute-0 sudo[57947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebfeifoavxjliqwuymceueoiufvulwfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161455.4698277-777-169217811461961/AnsiballZ_lineinfile.py'
Jan 23 09:44:15 compute-0 sudo[57947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:16 compute-0 python3.9[57949]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:16 compute-0 sudo[57947]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:17 compute-0 sudo[58101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylmnzhpdrdtvspjjjlbnksrxskrwtrjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161457.4429648-822-144598582674155/AnsiballZ_setup.py'
Jan 23 09:44:17 compute-0 sudo[58101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:18 compute-0 python3.9[58103]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:44:18 compute-0 sudo[58101]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:18 compute-0 sudo[58185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opiywngwlexnuoxvhgrayhvnjlhgegxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161457.4429648-822-144598582674155/AnsiballZ_systemd.py'
Jan 23 09:44:18 compute-0 sudo[58185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:19 compute-0 python3.9[58187]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:44:19 compute-0 sudo[58185]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:20 compute-0 sudo[58339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldnsdxdhwuyhfjpquuftwmaizxlzolws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161460.2133536-870-173035805043395/AnsiballZ_setup.py'
Jan 23 09:44:20 compute-0 sudo[58339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:20 compute-0 python3.9[58341]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:44:21 compute-0 sudo[58339]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:21 compute-0 sudo[58423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhqepwqsfelqorzczobhaxuzwpqnslmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161460.2133536-870-173035805043395/AnsiballZ_systemd.py'
Jan 23 09:44:21 compute-0 sudo[58423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:21 compute-0 python3.9[58425]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 09:44:21 compute-0 chronyd[791]: chronyd exiting
Jan 23 09:44:21 compute-0 systemd[1]: Stopping NTP client/server...
Jan 23 09:44:21 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 23 09:44:21 compute-0 systemd[1]: Stopped NTP client/server.
Jan 23 09:44:21 compute-0 systemd[1]: Starting NTP client/server...
Jan 23 09:44:21 compute-0 chronyd[58433]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 23 09:44:21 compute-0 chronyd[58433]: Frequency -28.544 +/- 0.222 ppm read from /var/lib/chrony/drift
Jan 23 09:44:21 compute-0 chronyd[58433]: Loaded seccomp filter (level 2)
Jan 23 09:44:21 compute-0 systemd[1]: Started NTP client/server.
Jan 23 09:44:21 compute-0 sudo[58423]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:22 compute-0 sshd-session[53484]: Connection closed by 192.168.122.30 port 45732
Jan 23 09:44:22 compute-0 sshd-session[53481]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:44:22 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 09:44:22 compute-0 systemd[1]: session-12.scope: Consumed 26.207s CPU time.
Jan 23 09:44:22 compute-0 systemd-logind[784]: Session 12 logged out. Waiting for processes to exit.
Jan 23 09:44:22 compute-0 systemd-logind[784]: Removed session 12.
Jan 23 09:44:29 compute-0 sshd-session[58459]: Accepted publickey for zuul from 192.168.122.30 port 55544 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:44:29 compute-0 systemd-logind[784]: New session 13 of user zuul.
Jan 23 09:44:29 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 23 09:44:29 compute-0 sshd-session[58459]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:44:29 compute-0 sudo[58612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmzikysegpdxesxvvubcmfkpjsmunzyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161469.2762814-21-280259738838519/AnsiballZ_file.py'
Jan 23 09:44:29 compute-0 sudo[58612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:29 compute-0 python3.9[58614]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:29 compute-0 sudo[58612]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:30 compute-0 sudo[58764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntqbsynkswkzcgubipzmcnuxjztzyvsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161470.1514347-57-49702145359374/AnsiballZ_stat.py'
Jan 23 09:44:30 compute-0 sudo[58764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:30 compute-0 python3.9[58766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:44:30 compute-0 sudo[58764]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:31 compute-0 sudo[58887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nokcypchzthorbzntnwutvufmbjvhcvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161470.1514347-57-49702145359374/AnsiballZ_copy.py'
Jan 23 09:44:31 compute-0 sudo[58887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:31 compute-0 python3.9[58889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769161470.1514347-57-49702145359374/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:31 compute-0 sudo[58887]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:31 compute-0 sshd-session[58462]: Connection closed by 192.168.122.30 port 55544
Jan 23 09:44:31 compute-0 sshd-session[58459]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:44:31 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 09:44:31 compute-0 systemd[1]: session-13.scope: Consumed 1.567s CPU time.
Jan 23 09:44:31 compute-0 systemd-logind[784]: Session 13 logged out. Waiting for processes to exit.
Jan 23 09:44:31 compute-0 systemd-logind[784]: Removed session 13.
Jan 23 09:44:37 compute-0 sshd-session[58914]: Accepted publickey for zuul from 192.168.122.30 port 45364 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:44:37 compute-0 systemd-logind[784]: New session 14 of user zuul.
Jan 23 09:44:37 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 23 09:44:37 compute-0 sshd-session[58914]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:44:38 compute-0 python3.9[59067]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:44:39 compute-0 sudo[59221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvbivoyuomjjnhcsgdmyhfpwleejyzws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161479.2847934-54-32311887234850/AnsiballZ_file.py'
Jan 23 09:44:39 compute-0 sudo[59221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:39 compute-0 python3.9[59223]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:39 compute-0 sudo[59221]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:40 compute-0 sudo[59396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdpmdrvewkrbwjzqnwoiuklcrvnmemgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161480.1536908-78-142486544973982/AnsiballZ_stat.py'
Jan 23 09:44:40 compute-0 sudo[59396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:40 compute-0 python3.9[59398]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:44:40 compute-0 sudo[59396]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:41 compute-0 sudo[59519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdcyqlnxffnnpaitqcqkodhlobyyjmdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161480.1536908-78-142486544973982/AnsiballZ_copy.py'
Jan 23 09:44:41 compute-0 sudo[59519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:41 compute-0 python3.9[59521]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769161480.1536908-78-142486544973982/.source.json _original_basename=.4uzxub2b follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:41 compute-0 sudo[59519]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:42 compute-0 sudo[59671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwdunchrhjhhdipvzpystbjewfxqlcjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161482.013251-147-25359411756363/AnsiballZ_stat.py'
Jan 23 09:44:42 compute-0 sudo[59671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:42 compute-0 python3.9[59673]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:44:42 compute-0 sudo[59671]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:42 compute-0 sudo[59794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmmoeqaudrihjqfqxaabyrvohzmmrxnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161482.013251-147-25359411756363/AnsiballZ_copy.py'
Jan 23 09:44:42 compute-0 sudo[59794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:43 compute-0 python3.9[59796]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769161482.013251-147-25359411756363/.source _original_basename=.zyfvcekj follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:43 compute-0 sudo[59794]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:43 compute-0 sudo[59946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqfavmwduzlgayfvkryytnjzuwwcjwzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161483.374071-195-80603604424410/AnsiballZ_file.py'
Jan 23 09:44:43 compute-0 sudo[59946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:43 compute-0 python3.9[59948]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:44:43 compute-0 sudo[59946]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:44 compute-0 sudo[60098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-julmdlsaiyhiwualeqkggdfdcdaxsjhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161484.1166263-219-42305925032372/AnsiballZ_stat.py'
Jan 23 09:44:44 compute-0 sudo[60098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:44 compute-0 python3.9[60100]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:44:44 compute-0 sudo[60098]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:45 compute-0 sudo[60221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blazddizbmevjdxeqfojpzpfclkvaufc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161484.1166263-219-42305925032372/AnsiballZ_copy.py'
Jan 23 09:44:45 compute-0 sudo[60221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:45 compute-0 python3.9[60223]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769161484.1166263-219-42305925032372/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:44:45 compute-0 sudo[60221]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:45 compute-0 sudo[60373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkrxdmakuzvnmuvyvrwqvtypcbjhyrxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161485.4085424-219-14486975010574/AnsiballZ_stat.py'
Jan 23 09:44:45 compute-0 sudo[60373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:46 compute-0 python3.9[60375]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:44:46 compute-0 sudo[60373]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:46 compute-0 sudo[60496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iinhxjafbrhldzrpdfxfjlqbipxujiqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161485.4085424-219-14486975010574/AnsiballZ_copy.py'
Jan 23 09:44:46 compute-0 sudo[60496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:46 compute-0 python3.9[60498]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769161485.4085424-219-14486975010574/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:44:46 compute-0 sudo[60496]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:47 compute-0 sudo[60648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pehgaacsqwgyzfzfhirszkjemqowcazp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161487.0876822-306-13610210939846/AnsiballZ_file.py'
Jan 23 09:44:47 compute-0 sudo[60648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:47 compute-0 python3.9[60650]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:47 compute-0 sudo[60648]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:48 compute-0 sudo[60800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klcsxhfdvtxosxzdwxpaqxaugorkdmcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161487.751539-330-260739586086856/AnsiballZ_stat.py'
Jan 23 09:44:48 compute-0 sudo[60800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:48 compute-0 python3.9[60802]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:44:48 compute-0 sudo[60800]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:48 compute-0 sudo[60923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kznzztardgtcgbeoidhucdfwanhkmsih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161487.751539-330-260739586086856/AnsiballZ_copy.py'
Jan 23 09:44:48 compute-0 sudo[60923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:48 compute-0 python3.9[60925]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161487.751539-330-260739586086856/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:48 compute-0 sudo[60923]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:49 compute-0 sudo[61075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scnkrstufeddeullxbazftirsjlbkrwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161488.963893-375-239537602483349/AnsiballZ_stat.py'
Jan 23 09:44:49 compute-0 sudo[61075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:49 compute-0 python3.9[61077]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:44:49 compute-0 sudo[61075]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:49 compute-0 sudo[61198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blawipsxjaautuilwiberzowimwimhfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161488.963893-375-239537602483349/AnsiballZ_copy.py'
Jan 23 09:44:49 compute-0 sudo[61198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:49 compute-0 python3.9[61200]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161488.963893-375-239537602483349/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:49 compute-0 sudo[61198]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:50 compute-0 sudo[61350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etwvvdzwonvcjvblnjojcbvryenevypq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161490.167216-420-275291727051885/AnsiballZ_systemd.py'
Jan 23 09:44:50 compute-0 sudo[61350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:51 compute-0 python3.9[61352]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:44:51 compute-0 systemd[1]: Reloading.
Jan 23 09:44:51 compute-0 systemd-sysv-generator[61383]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:44:51 compute-0 systemd-rc-local-generator[61380]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:44:51 compute-0 systemd[1]: Reloading.
Jan 23 09:44:51 compute-0 systemd-rc-local-generator[61415]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:44:51 compute-0 systemd-sysv-generator[61418]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:44:51 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 23 09:44:51 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 23 09:44:51 compute-0 sudo[61350]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:52 compute-0 sudo[61578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyebbsfcfgzhjdlnqvexahytiutczwgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161491.938888-444-268605928350703/AnsiballZ_stat.py'
Jan 23 09:44:52 compute-0 sudo[61578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:52 compute-0 python3.9[61580]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:44:52 compute-0 sudo[61578]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:52 compute-0 sudo[61701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hldylrtykyuyybzjjhwxiwtvzqwzqpch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161491.938888-444-268605928350703/AnsiballZ_copy.py'
Jan 23 09:44:52 compute-0 sudo[61701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:52 compute-0 python3.9[61703]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161491.938888-444-268605928350703/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:53 compute-0 sudo[61701]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:53 compute-0 sudo[61853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbgvqkviitnojiehjlidlurpnbtvynhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161493.2481594-489-242258888418888/AnsiballZ_stat.py'
Jan 23 09:44:53 compute-0 sudo[61853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:53 compute-0 python3.9[61855]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:44:53 compute-0 sudo[61853]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:54 compute-0 sudo[61976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmrlosffndjmksmwjznuaobkciplcozx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161493.2481594-489-242258888418888/AnsiballZ_copy.py'
Jan 23 09:44:54 compute-0 sudo[61976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:54 compute-0 python3.9[61978]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161493.2481594-489-242258888418888/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:44:54 compute-0 sudo[61976]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:55 compute-0 sudo[62128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxixajafrksigrqhrxksrurczfnvzgid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161494.760292-534-68165405218044/AnsiballZ_systemd.py'
Jan 23 09:44:55 compute-0 sudo[62128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:44:55 compute-0 python3.9[62130]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:44:55 compute-0 systemd[1]: Reloading.
Jan 23 09:44:55 compute-0 systemd-sysv-generator[62160]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:44:55 compute-0 systemd-rc-local-generator[62155]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:44:55 compute-0 systemd[1]: Reloading.
Jan 23 09:44:55 compute-0 systemd-rc-local-generator[62194]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:44:55 compute-0 systemd-sysv-generator[62198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:44:55 compute-0 systemd[1]: Starting Create netns directory...
Jan 23 09:44:55 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 09:44:55 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 09:44:55 compute-0 systemd[1]: Finished Create netns directory.
Jan 23 09:44:55 compute-0 sudo[62128]: pam_unix(sudo:session): session closed for user root
Jan 23 09:44:56 compute-0 python3.9[62356]: ansible-ansible.builtin.service_facts Invoked
Jan 23 09:44:56 compute-0 network[62373]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 09:44:56 compute-0 network[62374]: 'network-scripts' will be removed from distribution in near future.
Jan 23 09:44:56 compute-0 network[62375]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 09:45:01 compute-0 sudo[62635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufmiskfvphlkzvxiuklunhyqdonccqnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161500.8820677-582-23271859418531/AnsiballZ_systemd.py'
Jan 23 09:45:01 compute-0 sudo[62635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:01 compute-0 python3.9[62637]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:45:01 compute-0 systemd[1]: Reloading.
Jan 23 09:45:01 compute-0 systemd-rc-local-generator[62665]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:45:01 compute-0 systemd-sysv-generator[62669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:45:01 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 23 09:45:02 compute-0 iptables.init[62678]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 23 09:45:02 compute-0 iptables.init[62678]: iptables: Flushing firewall rules: [  OK  ]
Jan 23 09:45:02 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 23 09:45:02 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 23 09:45:02 compute-0 sudo[62635]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:02 compute-0 sudo[62873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olputaamlwvnpnkeosoghcnjyxclhccu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161502.369071-582-254624759398861/AnsiballZ_systemd.py'
Jan 23 09:45:02 compute-0 sudo[62873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:02 compute-0 python3.9[62875]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:45:03 compute-0 sudo[62873]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:03 compute-0 sudo[63027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bobwyngfoyuhaogtnmblccqotgqpwdlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161503.2985044-630-185182630784478/AnsiballZ_systemd.py'
Jan 23 09:45:03 compute-0 sudo[63027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:03 compute-0 python3.9[63029]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:45:03 compute-0 systemd[1]: Reloading.
Jan 23 09:45:04 compute-0 systemd-rc-local-generator[63058]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:45:04 compute-0 systemd-sysv-generator[63061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:45:04 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 23 09:45:04 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 23 09:45:04 compute-0 sudo[63027]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:05 compute-0 sudo[63219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pepbjzodknxmxdfqozauaumzyognfasb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161504.5600235-654-62467437158828/AnsiballZ_command.py'
Jan 23 09:45:05 compute-0 sudo[63219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:05 compute-0 python3.9[63221]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:45:05 compute-0 sudo[63219]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:06 compute-0 sudo[63372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvijrnogsaamlceursdtxsublbngjlbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161506.1423407-696-72165367258105/AnsiballZ_stat.py'
Jan 23 09:45:06 compute-0 sudo[63372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:06 compute-0 python3.9[63374]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:45:06 compute-0 sudo[63372]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:07 compute-0 sudo[63497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sewlqhsuoqjpaatzqdywmmqytmwkkrrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161506.1423407-696-72165367258105/AnsiballZ_copy.py'
Jan 23 09:45:07 compute-0 sudo[63497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:07 compute-0 python3.9[63499]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769161506.1423407-696-72165367258105/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:07 compute-0 sudo[63497]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:07 compute-0 sudo[63650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbiwhsoojnjzfusqfaqcphfmyskhpadm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161507.4279375-741-124145883787907/AnsiballZ_systemd.py'
Jan 23 09:45:07 compute-0 sudo[63650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:08 compute-0 python3.9[63652]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 09:45:08 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 23 09:45:08 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 23 09:45:08 compute-0 sshd[1004]: Received SIGHUP; restarting.
Jan 23 09:45:08 compute-0 sshd[1004]: Server listening on 0.0.0.0 port 22.
Jan 23 09:45:08 compute-0 sshd[1004]: Server listening on :: port 22.
Jan 23 09:45:08 compute-0 sudo[63650]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:08 compute-0 sudo[63806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anrponxwgldyjmdanqlyzoocwhnyxaow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161508.2807915-765-208944570490430/AnsiballZ_file.py'
Jan 23 09:45:08 compute-0 sudo[63806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:08 compute-0 python3.9[63808]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:08 compute-0 sudo[63806]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:09 compute-0 sudo[63958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkupcllvrkarrdnjctyqawumvhgrnovq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161508.9994912-789-132950859564875/AnsiballZ_stat.py'
Jan 23 09:45:09 compute-0 sudo[63958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:09 compute-0 python3.9[63960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:45:09 compute-0 sudo[63958]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:09 compute-0 sudo[64081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbynatosbdnvmqbeffprvltdvstqrhmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161508.9994912-789-132950859564875/AnsiballZ_copy.py'
Jan 23 09:45:09 compute-0 sudo[64081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:10 compute-0 python3.9[64083]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161508.9994912-789-132950859564875/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:10 compute-0 sudo[64081]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:10 compute-0 sudo[64233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxvxiktpjembxbjqfevqawgirdeqktir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161510.5173852-843-259890373373420/AnsiballZ_timezone.py'
Jan 23 09:45:10 compute-0 sudo[64233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:11 compute-0 python3.9[64235]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 23 09:45:11 compute-0 systemd[1]: Starting Time & Date Service...
Jan 23 09:45:11 compute-0 systemd[1]: Started Time & Date Service.
Jan 23 09:45:11 compute-0 sudo[64233]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:11 compute-0 sudo[64389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-narzuigpakhagapjvplpkahjbtzobfqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161511.5403638-870-153784052435621/AnsiballZ_file.py'
Jan 23 09:45:11 compute-0 sudo[64389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:12 compute-0 python3.9[64391]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:12 compute-0 sudo[64389]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:13 compute-0 sudo[64541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teermemusodxkqqistxiingudqszrbiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161513.0880973-894-137566919809911/AnsiballZ_stat.py'
Jan 23 09:45:13 compute-0 sudo[64541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:13 compute-0 python3.9[64543]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:45:13 compute-0 sudo[64541]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:13 compute-0 sudo[64664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjgzajgsrfrwzkilrkmnslhiwlbhbnsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161513.0880973-894-137566919809911/AnsiballZ_copy.py'
Jan 23 09:45:13 compute-0 sudo[64664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:14 compute-0 python3.9[64666]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769161513.0880973-894-137566919809911/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:14 compute-0 sudo[64664]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:14 compute-0 sudo[64816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dusrjyraatlyyzzujuvydzrwiwgvnfxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161514.3579543-939-106415082767025/AnsiballZ_stat.py'
Jan 23 09:45:14 compute-0 sudo[64816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:14 compute-0 python3.9[64818]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:45:14 compute-0 sudo[64816]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:15 compute-0 sudo[64939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwbigenrdjrpgrhggcabxcmzeslkiquq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161514.3579543-939-106415082767025/AnsiballZ_copy.py'
Jan 23 09:45:15 compute-0 sudo[64939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:15 compute-0 python3.9[64941]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769161514.3579543-939-106415082767025/.source.yaml _original_basename=.6hk4xqob follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:15 compute-0 sudo[64939]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:15 compute-0 sudo[65091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yciereqmbcsevcplsmgeyaeubzfnnrdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161515.5763707-984-146953590239305/AnsiballZ_stat.py'
Jan 23 09:45:15 compute-0 sudo[65091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:16 compute-0 python3.9[65093]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:45:16 compute-0 sudo[65091]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:16 compute-0 sudo[65214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiyzujqxfgdtfmyppgxuknrpmpukopkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161515.5763707-984-146953590239305/AnsiballZ_copy.py'
Jan 23 09:45:16 compute-0 sudo[65214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:16 compute-0 python3.9[65216]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161515.5763707-984-146953590239305/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:16 compute-0 sudo[65214]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:17 compute-0 sudo[65366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtdxcuqaqveovkyqujkjvrzadyeyobdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161517.113321-1029-143393249536270/AnsiballZ_command.py'
Jan 23 09:45:17 compute-0 sudo[65366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:17 compute-0 python3.9[65368]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:45:17 compute-0 sudo[65366]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:18 compute-0 sudo[65519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfdfmjcrluybleaxeyimjqvzqwgtlpge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161517.8870823-1053-764328561047/AnsiballZ_command.py'
Jan 23 09:45:18 compute-0 sudo[65519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:18 compute-0 python3.9[65521]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:45:18 compute-0 sudo[65519]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:19 compute-0 sudo[65672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhbkivucibitwwcpletkowrlegqnyiev ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769161518.5964952-1077-124484384695836/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 09:45:19 compute-0 sudo[65672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:19 compute-0 python3[65674]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 09:45:19 compute-0 sudo[65672]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:19 compute-0 sudo[65824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjpulnxfmluojdbhfzpwbeyhjabzestw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161519.4668474-1101-197946870169058/AnsiballZ_stat.py'
Jan 23 09:45:19 compute-0 sudo[65824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:19 compute-0 python3.9[65826]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:45:19 compute-0 sudo[65824]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:20 compute-0 sudo[65947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mncojxgcvqioqeqgtcweritfvtttnejs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161519.4668474-1101-197946870169058/AnsiballZ_copy.py'
Jan 23 09:45:20 compute-0 sudo[65947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:20 compute-0 python3.9[65949]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161519.4668474-1101-197946870169058/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:20 compute-0 sudo[65947]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:21 compute-0 sudo[66099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyjpsqfgqmbmebupugfyelmctyimaylt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161520.7415159-1146-244610242283204/AnsiballZ_stat.py'
Jan 23 09:45:21 compute-0 sudo[66099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:21 compute-0 python3.9[66101]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:45:21 compute-0 sudo[66099]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:21 compute-0 sudo[66222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clbgbrwhghhdcwityegctlhynnkownnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161520.7415159-1146-244610242283204/AnsiballZ_copy.py'
Jan 23 09:45:21 compute-0 sudo[66222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:21 compute-0 python3.9[66224]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161520.7415159-1146-244610242283204/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:21 compute-0 sudo[66222]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:22 compute-0 sudo[66374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziqtgbzsaincyfmtaznhhtztyrmwznat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161521.9933994-1191-111057364707098/AnsiballZ_stat.py'
Jan 23 09:45:22 compute-0 sudo[66374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:22 compute-0 python3.9[66376]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:45:22 compute-0 sudo[66374]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:22 compute-0 sudo[66497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kreifxgaabenqsyhwzgvfpoqruuryevb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161521.9933994-1191-111057364707098/AnsiballZ_copy.py'
Jan 23 09:45:22 compute-0 sudo[66497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:23 compute-0 python3.9[66499]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161521.9933994-1191-111057364707098/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:23 compute-0 sudo[66497]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:23 compute-0 sudo[66649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwprqikftwscovwwnfoimsfmhmkgoeqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161523.2979338-1236-190695494946544/AnsiballZ_stat.py'
Jan 23 09:45:23 compute-0 sudo[66649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:23 compute-0 python3.9[66651]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:45:23 compute-0 sudo[66649]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:24 compute-0 sudo[66772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aucdusdmtvbvhbnymvvqfwiqpgtmraoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161523.2979338-1236-190695494946544/AnsiballZ_copy.py'
Jan 23 09:45:24 compute-0 sudo[66772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:24 compute-0 python3.9[66774]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161523.2979338-1236-190695494946544/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:24 compute-0 sudo[66772]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:24 compute-0 sudo[66924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnfurisgaovcdrlzjdyehorchtqqfooc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161524.5885196-1281-189598321061327/AnsiballZ_stat.py'
Jan 23 09:45:24 compute-0 sudo[66924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:25 compute-0 python3.9[66926]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:45:25 compute-0 sudo[66924]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:25 compute-0 sudo[67047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avgthszcytxhymekbkfljflqfybmpucd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161524.5885196-1281-189598321061327/AnsiballZ_copy.py'
Jan 23 09:45:25 compute-0 sudo[67047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:25 compute-0 python3.9[67049]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769161524.5885196-1281-189598321061327/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:25 compute-0 sudo[67047]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:26 compute-0 sudo[67199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnzzgtkynrxomjmdxltgxykcotluzczz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161525.9271667-1326-60256719280483/AnsiballZ_file.py'
Jan 23 09:45:26 compute-0 sudo[67199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:26 compute-0 python3.9[67201]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:26 compute-0 sudo[67199]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:26 compute-0 sudo[67351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdghszgkoxcgvfylvchvqiklmrwouuqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161526.6201386-1350-20535874615995/AnsiballZ_command.py'
Jan 23 09:45:26 compute-0 sudo[67351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:27 compute-0 python3.9[67353]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:45:27 compute-0 sudo[67351]: pam_unix(sudo:session): session closed for user root
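Annotation: the check task above concatenates the EDPM rule fragments and asks nft to parse them without applying anything. A minimal shell equivalent of the logged command (file names taken verbatim from the log):

    # Syntax-only validation of the combined ruleset (-c = check, -f - = read stdin).
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -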
Jan 23 09:45:27 compute-0 sudo[67510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyzumvxlmsxrvusiumffsbkpvkhpqpxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161527.4557803-1374-102044083649959/AnsiballZ_blockinfile.py'
Jan 23 09:45:27 compute-0 sudo[67510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:28 compute-0 python3.9[67512]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:28 compute-0 sudo[67510]: pam_unix(sudo:session): session closed for user root
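Annotation: based on the logged blockinfile parameters (block content, BEGIN/END markers, validate=nft -c -f %s), the managed block kept in /etc/sysconfig/nftables.conf should look roughly as below. Sketched here as a heredoc for illustration only; blockinfile replaces the marked block in place rather than appending, and validates the whole file before writing.

    cat >> /etc/sysconfig/nftables.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
    EOF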
Jan 23 09:45:28 compute-0 sudo[67663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itcfjxyuidwwbszhcsdbulbsmexgmxnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161528.391813-1401-130135182365091/AnsiballZ_file.py'
Jan 23 09:45:28 compute-0 sudo[67663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:28 compute-0 python3.9[67665]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:28 compute-0 sudo[67663]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:29 compute-0 sudo[67815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvrohtpjjgbxgperekgdwaefxrhatiyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161529.0169363-1401-200905219036008/AnsiballZ_file.py'
Jan 23 09:45:29 compute-0 sudo[67815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:29 compute-0 python3.9[67817]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:29 compute-0 sudo[67815]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:30 compute-0 sudo[67967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdaxfjqdpwjjacdxatndooihgewewjfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161529.955423-1446-5279756016989/AnsiballZ_mount.py'
Jan 23 09:45:30 compute-0 sudo[67967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:30 compute-0 python3.9[67969]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 09:45:30 compute-0 sudo[67967]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:31 compute-0 sudo[68120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxniuftbjcfyyorsoldqaskyxoyzweys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161530.891996-1446-137925641551579/AnsiballZ_mount.py'
Jan 23 09:45:31 compute-0 sudo[68120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:31 compute-0 python3.9[68122]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 09:45:31 compute-0 sudo[68120]: pam_unix(sudo:session): session closed for user root
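Annotation: the two directory tasks and two ansible.posix.mount tasks above amount to the following manual steps (the module also persists matching fstab entries because boot=True):

    # 1 GiB and 2 MiB hugepage mount points, owned by zuul:hugetlbfs as in the file tasks.
    install -d -o zuul -g hugetlbfs -m 0775 /dev/hugepages1G /dev/hugepages2M
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # Persistent equivalents written to /etc/fstab by the mount module:
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0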
Jan 23 09:45:31 compute-0 sshd-session[58917]: Connection closed by 192.168.122.30 port 45364
Jan 23 09:45:31 compute-0 sshd-session[58914]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:45:31 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 09:45:31 compute-0 systemd[1]: session-14.scope: Consumed 35.660s CPU time.
Jan 23 09:45:31 compute-0 systemd-logind[784]: Session 14 logged out. Waiting for processes to exit.
Jan 23 09:45:31 compute-0 systemd-logind[784]: Removed session 14.
Jan 23 09:45:37 compute-0 sshd-session[68148]: Accepted publickey for zuul from 192.168.122.30 port 44130 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:45:37 compute-0 systemd-logind[784]: New session 15 of user zuul.
Jan 23 09:45:37 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 23 09:45:37 compute-0 sshd-session[68148]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:45:38 compute-0 sudo[68301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrxdydupwoozkqzpkstktmxhwsdzhwpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161537.5640028-18-202264657692920/AnsiballZ_tempfile.py'
Jan 23 09:45:38 compute-0 sudo[68301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:38 compute-0 python3.9[68303]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 23 09:45:38 compute-0 sudo[68301]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:38 compute-0 sudo[68453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zznyftviolzhtyftrlnhupdcvusuiebt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161538.3841634-54-113095103404990/AnsiballZ_stat.py'
Jan 23 09:45:38 compute-0 sudo[68453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:39 compute-0 python3.9[68455]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:45:39 compute-0 sudo[68453]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:39 compute-0 sudo[68605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oghjygcqlazszrqwclfkwxixrxrsmjoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161539.3588128-84-253607558097425/AnsiballZ_setup.py'
Jan 23 09:45:39 compute-0 sudo[68605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:40 compute-0 python3.9[68607]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:45:40 compute-0 sudo[68605]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:41 compute-0 sudo[68757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxhagjiylgbdxhyplofepfhkpaejbdkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161540.585129-109-135746606944475/AnsiballZ_blockinfile.py'
Jan 23 09:45:41 compute-0 sudo[68757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:41 compute-0 python3.9[68759]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+cj2so8SS29oYZ1K+7e02qi6fVkGXJzGMkIN9mgJPLCBtQ6vpBYEObTZZXuMIHhdiMUAp6RDjs11OXDkAB9R7e2ncjMKn7J2EHbmceT7rNq9L0w+QaLKFxl+xdJQ9QtO9ioNgJFXXQZt/IOeE8S4I5yhEM5jn+YEW0LPbp99Wz1d1Ob4GI1t0hCEv/4ayC3nRIXkuIhl7mrV0s22F8NE8f0hZZKaw1u8xmmpbD8ZVBsC6cxWE3kIQBmHu8q9tylaZjLsjGxBDUF9ko3bxeppvLPDMem89VLQCWbgmOHl5ZIPsyNglusTIBUp8uA7g+Agz1uMojClMHnsZl68WjbCAVcRA9y/UgXphGyEYZCUJMv8CjYKzxriyHALZl6YFSyC5ELlEAxL8fyTwtXhQ1+e/lI9Ak3n4suC6JyH0NQ27MPIf7riyUFJLw9lZaDerZOkvI7/Y2PfRvdfyZ57g/xgGeLY0Ch30SFVC04lNXIpsOWbLBOg0BMP9ZiciAYAF9Yc=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIreWuVcekgp7kF5pU+4TIKLHZyhuqd4Ly312ExEA5EG
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJWfXOTsTXqDhdGhW7VcUXsYqCS7TzCPyaa9/dA9e0xKjnni1/GRM8FdYXWYbGsNnBQFWk3/pXD6sj3jKzK34AM=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWbrXZxuAw0n/xJmOvWW/Qbg53ya2CuJKzcHA+OvDpHLHGxkEuiUhwKvqUbfSTzn0o1M00OYITJIvZVINGRtQC7hGvBPWLVBON097mcmnju857I72U3dGdvGhnEUHyrglCV+xSkafQTTlnY9B59EKImUs/kiwRy3cYDWkCgthJgiPA4QSw6WrzaqpY2ET+7n+yY31EOagGA3ufW43qFbHX4diFuXpS1I1PLvvA4KINlMlsFcyR29j4nQk/vb5hMpLmBOlfVH16CXZC98a0ltp9ib7F3e1Wjdogj92kxwfQMYIeQEBp11Tc/PY5U90J51oyk8xYOKfsP3+r9yczmfRDjwR3+tzUMKyZYAsKQVcOGQC7x9sEXg3mBeXRVrlIVZFMuNVcYq4CY40fDIybcI25GxgRbQR7ZUWODG1SL7RF02Z+LQB6APXkzxdQUWLWPryj/EtOgnHQ1I0+BJTWrqGkKbSj41jhRTfS+MZvRXAJ+fNyZFhpkHo54DrCii4cbyM=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGRPkwTcFVg/dIKRq29iWBfkoVFqIQ1pXOCPxfcGWRFF
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGf/hJ2dg/PRwojw63FLyKqua+ChKP+2bc7Eb0p70H6ve1elFVeY8lVRXx33JWc2m/XfgSWPNcUs9zBG8QcFVak=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDA/6JnQZ3CFC7xgv4DrvdZizVbVnsolKcWkvqzGu1hFHGmOEb7ehbxGPHBnp2N9iRf13H12EI0qNI6A2f44V0oXE3SP+fpJ6PVYQRQpKqTEiweqZaHEyYE2FnKy0HDQisg5hwr1egYLjGXChdkyqWSokL1LqaCyD2+EcOzUvC/GuVQ7eQnQBIGBpYAnNzS/64KKOZ0+0soOPJGxVCma6JN/2GcCunX6j3HmkOOQeuEFETXfUPHh1ylu2+3yINl34ERJN5YwgR/S+BKENOsJTu5XkYTCvc90CuvfkoF9K5Y2yE5nKwZaSf7n2SbUPil2Zph4l7opsd5IKxi6k2mVzw/CO2NHr136BZ06+sKXytDgorWqWzqnci8zfxeYF3D7q7AXD+IDVMP5T6op93oS2enAQFHG1vTLB0otQqnxUgNANbJkrKgXAS8G8I1m2sPz+qOFuuZa2/nqhzrd6/DEur5VoW6n9c/OcrbfapLEzD1jQDmsQI7oZkT++dt3Ogb3Vk=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIII1sLqY7Nqi1A3CKXLokfn1vrns/lK1gUkDNSlbek2o
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM9QZXHUsthFMKA5Si4Htl7MIwK0G4VAltQgbo39JJHrgD7h27U1jbnuJQ1S2bBX8FMSkqf5TPmM7Gr9QOATO+4=
                                             create=True mode=0644 path=/tmp/ansible.avdjiu43 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:41 compute-0 sudo[68757]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:41 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 09:45:41 compute-0 sudo[68912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlpsywzglzwhbupdisnpaiqooajiogjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161541.4079742-133-46645928093674/AnsiballZ_command.py'
Jan 23 09:45:41 compute-0 sudo[68912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:42 compute-0 python3.9[68914]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.avdjiu43' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:45:42 compute-0 sudo[68912]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:42 compute-0 sudo[69066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjsafjkptnoxkbxnlktfirgnrwvzwxuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161542.2752523-157-78882560826548/AnsiballZ_file.py'
Jan 23 09:45:42 compute-0 sudo[69066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:42 compute-0 python3.9[69068]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.avdjiu43 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:42 compute-0 sudo[69066]: pam_unix(sudo:session): session closed for user root
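Annotation: the known-hosts update above follows a build-in-temp-file, install, clean-up pattern. A condensed sketch of the same flow, with the host keys elided (each entry follows the logged "<fqdn>,<ip>,<shortname>* <keytype> <key>" form):

    # Assemble the shared known_hosts content in a temp file, then install it atomically enough for this use.
    tmp=$(mktemp /tmp/ansible.XXXXXX)
    {
      echo '# BEGIN ANSIBLE MANAGED BLOCK'
      # ... one line per node and key type, e.g. "compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 ..." ...
      echo '# END ANSIBLE MANAGED BLOCK'
    } > "$tmp"
    cat "$tmp" > /etc/ssh/ssh_known_hosts
    rm -f "$tmp"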
Jan 23 09:45:43 compute-0 sshd-session[68151]: Connection closed by 192.168.122.30 port 44130
Jan 23 09:45:43 compute-0 sshd-session[68148]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:45:43 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 09:45:43 compute-0 systemd[1]: session-15.scope: Consumed 3.303s CPU time.
Jan 23 09:45:43 compute-0 systemd-logind[784]: Session 15 logged out. Waiting for processes to exit.
Jan 23 09:45:43 compute-0 systemd-logind[784]: Removed session 15.
Jan 23 09:45:49 compute-0 sshd-session[69093]: Accepted publickey for zuul from 192.168.122.30 port 52698 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:45:49 compute-0 systemd-logind[784]: New session 16 of user zuul.
Jan 23 09:45:49 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 23 09:45:49 compute-0 sshd-session[69093]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:45:50 compute-0 python3.9[69246]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:45:51 compute-0 sudo[69400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtqkevxyyvkgqddgdfzcxsvusxfuphsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161551.025119-51-47397612279631/AnsiballZ_systemd.py'
Jan 23 09:45:51 compute-0 sudo[69400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:52 compute-0 python3.9[69402]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 23 09:45:52 compute-0 sudo[69400]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:52 compute-0 sudo[69554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jahixmiicrtjcnmaufvqmwskauwdxvyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161552.37227-75-195160366009712/AnsiballZ_systemd.py'
Jan 23 09:45:52 compute-0 sudo[69554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:52 compute-0 python3.9[69556]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 09:45:53 compute-0 sudo[69554]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:53 compute-0 sudo[69707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inmlngamompythwlfwwzzajcoolqwrhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161553.2569416-102-78694606194150/AnsiballZ_command.py'
Jan 23 09:45:53 compute-0 sudo[69707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:53 compute-0 python3.9[69709]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:45:53 compute-0 sudo[69707]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:54 compute-0 sudo[69860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hawxdicexfmuhickxdromabdoqewgfaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161554.249576-126-269811481628396/AnsiballZ_stat.py'
Jan 23 09:45:54 compute-0 sudo[69860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:54 compute-0 python3.9[69862]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:45:54 compute-0 sudo[69860]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:55 compute-0 sudo[70015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aejjmjiczdpcmjyqzauseogsuvqriyip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161555.0591319-150-160605827207175/AnsiballZ_command.py'
Jan 23 09:45:55 compute-0 sudo[70015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:55 compute-0 python3.9[70017]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:45:55 compute-0 sudo[70015]: pam_unix(sudo:session): session closed for user root
Jan 23 09:45:56 compute-0 sudo[70170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrchpubmzazvxtunyiquqhpbptsogack ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161555.8301897-174-250516382160263/AnsiballZ_file.py'
Jan 23 09:45:56 compute-0 sudo[70170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:45:56 compute-0 python3.9[70172]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:45:56 compute-0 sudo[70170]: pam_unix(sudo:session): session closed for user root
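Annotation: the apply phase in session 16 mirrors the earlier check: the chain definitions are loaded first, the flush/rules/jump-update fragments are streamed into nft, and the .changed marker is removed once the load succeeds. A shell equivalent of the logged sequence:

    # Load the chain definitions, then apply flushes, rules and jump updates.
    nft -f /etc/nftables/edpm-chains.nft
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
    # Clear the marker so the rules are not re-applied on the next run.
    rm -f /etc/nftables/edpm-rules.nft.changed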
Jan 23 09:45:57 compute-0 sshd-session[69096]: Connection closed by 192.168.122.30 port 52698
Jan 23 09:45:57 compute-0 sshd-session[69093]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:45:57 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 09:45:57 compute-0 systemd[1]: session-16.scope: Consumed 4.560s CPU time.
Jan 23 09:45:57 compute-0 systemd-logind[784]: Session 16 logged out. Waiting for processes to exit.
Jan 23 09:45:57 compute-0 systemd-logind[784]: Removed session 16.
Jan 23 09:46:02 compute-0 sshd-session[70197]: Accepted publickey for zuul from 192.168.122.30 port 39618 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:46:02 compute-0 systemd-logind[784]: New session 17 of user zuul.
Jan 23 09:46:02 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 23 09:46:02 compute-0 sshd-session[70197]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:46:03 compute-0 python3.9[70350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:46:04 compute-0 sudo[70504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqjqkmzmosppqlkjdkocqochsrusnrma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161564.0564551-57-153576538426436/AnsiballZ_setup.py'
Jan 23 09:46:04 compute-0 sudo[70504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:04 compute-0 python3.9[70506]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:46:05 compute-0 sudo[70504]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:06 compute-0 sudo[70588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uszfsyqycerylddpqpsxeysezdgjnhhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769161564.0564551-57-153576538426436/AnsiballZ_dnf.py'
Jan 23 09:46:06 compute-0 sudo[70588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:06 compute-0 python3.9[70590]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 09:46:07 compute-0 sudo[70588]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:08 compute-0 python3.9[70741]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:46:10 compute-0 python3.9[70892]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
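Annotation: both reboot checks above can be reproduced directly; needs-restarting -r (from yum-utils, installed just before) exits non-zero when a reboot is required, and the find task looks for marker files left by earlier roles. A sketch:

    # Exit status 1 from needs-restarting -r means updated core packages require a reboot.
    if ! needs-restarting -r; then
        echo "reboot required by updated core packages"
    fi
    # Any file dropped here by earlier roles also signals a pending reboot.
    ls -A /var/lib/openstack/reboot_required/ 2>/dev/null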
Jan 23 09:46:10 compute-0 python3.9[71042]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:46:11 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 09:46:11 compute-0 python3.9[71194]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:46:12 compute-0 sshd-session[70200]: Connection closed by 192.168.122.30 port 39618
Jan 23 09:46:12 compute-0 sshd-session[70197]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:46:12 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 09:46:12 compute-0 systemd[1]: session-17.scope: Consumed 5.867s CPU time.
Jan 23 09:46:12 compute-0 systemd-logind[784]: Session 17 logged out. Waiting for processes to exit.
Jan 23 09:46:12 compute-0 systemd-logind[784]: Removed session 17.
Jan 23 09:46:20 compute-0 sshd-session[71219]: Accepted publickey for zuul from 38.129.56.17 port 60458 ssh2: RSA SHA256:/TrmfiPCpRhp7iDH6L+XY56Icv2RRStSYrCVh8OnXTQ
Jan 23 09:46:20 compute-0 systemd-logind[784]: New session 18 of user zuul.
Jan 23 09:46:20 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 23 09:46:20 compute-0 sshd-session[71219]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:46:21 compute-0 sudo[71295]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctnvwuawtanvqvpqchapvjlxzvyjypmu ; /usr/bin/python3'
Jan 23 09:46:21 compute-0 sudo[71295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:21 compute-0 useradd[71299]: new group: name=ceph-admin, GID=42478
Jan 23 09:46:21 compute-0 useradd[71299]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 23 09:46:21 compute-0 sudo[71295]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:21 compute-0 sudo[71381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdhpkizwxesoqvsnewckedgssbfcwjxy ; /usr/bin/python3'
Jan 23 09:46:21 compute-0 sudo[71381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:21 compute-0 sudo[71381]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:22 compute-0 sudo[71454]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhqjbykdssqwmtpiysvpcshdkosmebar ; /usr/bin/python3'
Jan 23 09:46:22 compute-0 sudo[71454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:22 compute-0 sudo[71454]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:22 compute-0 sudo[71504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whpsjqpnjmmforadwwgrprijgxppuoag ; /usr/bin/python3'
Jan 23 09:46:22 compute-0 sudo[71504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:23 compute-0 sudo[71504]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:23 compute-0 sudo[71530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhahdibxbxkhutlwqhpjtslozygcmtgf ; /usr/bin/python3'
Jan 23 09:46:23 compute-0 sudo[71530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:23 compute-0 sudo[71530]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:23 compute-0 sudo[71556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrimzzyguandoeqibcoshrmtwatwgnnf ; /usr/bin/python3'
Jan 23 09:46:23 compute-0 sudo[71556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:23 compute-0 sudo[71556]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:24 compute-0 sudo[71582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fghazzzzhcgyiylhatxlxgcuyzwkklzu ; /usr/bin/python3'
Jan 23 09:46:24 compute-0 sudo[71582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:24 compute-0 sudo[71582]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:24 compute-0 sudo[71660]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeenubwspzunghahnjigixpiokvhjzem ; /usr/bin/python3'
Jan 23 09:46:24 compute-0 sudo[71660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:24 compute-0 sudo[71660]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:24 compute-0 sudo[71733]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlsayoppqvxmssghlvkqzmqbrsuzibzr ; /usr/bin/python3'
Jan 23 09:46:24 compute-0 sudo[71733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:25 compute-0 sudo[71733]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:25 compute-0 sudo[71835]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgyoxlghycovdwhkhwhprglwvueisrda ; /usr/bin/python3'
Jan 23 09:46:25 compute-0 sudo[71835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:25 compute-0 sudo[71835]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:25 compute-0 sudo[71908]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gorpwsihicyjikhdzpqruevhbutjtaib ; /usr/bin/python3'
Jan 23 09:46:25 compute-0 sudo[71908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:26 compute-0 sudo[71908]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:26 compute-0 sudo[71958]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uexlprwonwfqcxtmdjxiudgkwvksfuot ; /usr/bin/python3'
Jan 23 09:46:26 compute-0 sudo[71958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:26 compute-0 python3[71960]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:46:27 compute-0 sudo[71958]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:28 compute-0 sudo[72053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svdidjjwpzyrxmxlotteuvyvzeqjwfiu ; /usr/bin/python3'
Jan 23 09:46:28 compute-0 sudo[72053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:28 compute-0 python3[72055]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 09:46:30 compute-0 sudo[72053]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:30 compute-0 sudo[72080]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuxxzubadipesitchbslpoeyvavszevt ; /usr/bin/python3'
Jan 23 09:46:30 compute-0 sudo[72080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:30 compute-0 python3[72082]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 09:46:30 compute-0 sudo[72080]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:30 compute-0 sudo[72106]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtsuzpprmjjbpzvteouckhjfdxwmeqtd ; /usr/bin/python3'
Jan 23 09:46:30 compute-0 sudo[72106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:30 compute-0 python3[72108]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:46:30 compute-0 kernel: loop: module loaded
Jan 23 09:46:30 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Jan 23 09:46:31 compute-0 sudo[72106]: pam_unix(sudo:session): session closed for user root
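Annotation: the loop-backed OSD disk created above is a 20 GiB sparse file attached to /dev/loop3 (hence the kernel's "capacity change from 0 to 41943040" sectors). The logged command expands to:

    # Create a 20 GiB sparse backing file: count=0 with seek=20G writes no data, only sets the size.
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    # Attach it to a fixed loop device and confirm it shows up.
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk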
Jan 23 09:46:31 compute-0 sudo[72142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oarqwcueqdelqubgtfzfdfwejbdmgfbx ; /usr/bin/python3'
Jan 23 09:46:31 compute-0 sudo[72142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:31 compute-0 chronyd[58433]: Selected source 167.160.187.179 (pool.ntp.org)
Jan 23 09:46:31 compute-0 python3[72144]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:46:31 compute-0 lvm[72147]: PV /dev/loop3 not used.
Jan 23 09:46:31 compute-0 lvm[72149]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:46:31 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 23 09:46:31 compute-0 lvm[72156]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 23 09:46:31 compute-0 lvm[72159]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:46:31 compute-0 lvm[72159]: VG ceph_vg0 finished
Jan 23 09:46:31 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 23 09:46:31 compute-0 sudo[72142]: pam_unix(sudo:session): session closed for user root
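Annotation: the LVM stack layered on the loop device dedicates all of its space to a single logical volume for the OSD, exactly as the logged command shows:

    # Turn /dev/loop3 into a PV, build a VG on it, and allocate one LV spanning all free extents.
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs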
Jan 23 09:46:32 compute-0 sudo[72235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzxxbttozqnsnevpzhivdzizuquqevho ; /usr/bin/python3'
Jan 23 09:46:32 compute-0 sudo[72235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:32 compute-0 python3[72237]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:46:32 compute-0 sudo[72235]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:32 compute-0 sudo[72308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfschqxhzzhmkgzhdftyfopaeatjeqhj ; /usr/bin/python3'
Jan 23 09:46:32 compute-0 sudo[72308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:32 compute-0 python3[72310]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769161591.8818219-37003-269838671620076/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:46:32 compute-0 sudo[72308]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:33 compute-0 sudo[72358]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbehcylnogrtmcsctfjjogmvadmlajxn ; /usr/bin/python3'
Jan 23 09:46:33 compute-0 sudo[72358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:33 compute-0 python3[72360]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:46:33 compute-0 systemd[1]: Reloading.
Jan 23 09:46:33 compute-0 systemd-sysv-generator[72391]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:46:33 compute-0 systemd-rc-local-generator[72384]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:46:33 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 23 09:46:33 compute-0 bash[72400]: /dev/loop3: [64513]:4328449 (/var/lib/ceph-osd-0.img)
Jan 23 09:46:33 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 23 09:46:33 compute-0 sudo[72358]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:33 compute-0 lvm[72402]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:46:33 compute-0 lvm[72402]: VG ceph_vg0 finished
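Annotation: the content of ceph-osd-losetup-0.service is not logged; judging by its name, the bash output above (which matches `losetup /dev/loop3`), and the enable/start task, a plausible oneshot unit would look roughly like this. Treat it as an assumption, not the deployed unit:

    # Hypothetical reconstruction of /etc/systemd/system/ceph-osd-losetup-0.service.
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    # Report the loop device if already attached; otherwise re-attach the backing file.
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target
    EOF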
Jan 23 09:46:36 compute-0 python3[72426]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:46:38 compute-0 sudo[72517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pimnmvyrxcpdcnzsebyvhiaiadaygesd ; /usr/bin/python3'
Jan 23 09:46:38 compute-0 sudo[72517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:38 compute-0 python3[72519]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 09:46:41 compute-0 sudo[72517]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:41 compute-0 sudo[72575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jusbtwvtlgpfpdvfotzocaczkcpncuhe ; /usr/bin/python3'
Jan 23 09:46:41 compute-0 sudo[72575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:41 compute-0 python3[72577]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 09:46:45 compute-0 groupadd[72588]: group added to /etc/group: name=cephadm, GID=993
Jan 23 09:46:45 compute-0 groupadd[72588]: group added to /etc/gshadow: name=cephadm
Jan 23 09:46:45 compute-0 groupadd[72588]: new group: name=cephadm, GID=993
Jan 23 09:46:45 compute-0 useradd[72595]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 23 09:46:45 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 09:46:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 09:46:46 compute-0 sudo[72575]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:46 compute-0 sudo[72690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htypilezyyacdqcaqdssdtrptxzsjovq ; /usr/bin/python3'
Jan 23 09:46:46 compute-0 sudo[72690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:46 compute-0 python3[72692]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 09:46:46 compute-0 sudo[72690]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 09:46:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 09:46:46 compute-0 systemd[1]: run-r3d620d0531c44b0ea54b525d60138a91.service: Deactivated successfully.
Jan 23 09:46:46 compute-0 sudo[72719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jumsauhpqdodbkrxqpztyjvjnxjmlzxp ; /usr/bin/python3'
Jan 23 09:46:46 compute-0 sudo[72719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:46 compute-0 python3[72721]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:46:47 compute-0 sudo[72719]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:47 compute-0 sudo[72782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymzdygnkiezmilubwbqprfsjefsbhwjw ; /usr/bin/python3'
Jan 23 09:46:47 compute-0 sudo[72782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:47 compute-0 python3[72784]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:46:47 compute-0 sudo[72782]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:48 compute-0 sudo[72808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghcouofajvavqgungdxoryjgakwyislj ; /usr/bin/python3'
Jan 23 09:46:48 compute-0 sudo[72808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:48 compute-0 python3[72810]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:46:48 compute-0 sudo[72808]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:48 compute-0 sudo[72886]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtjyzkiswowvnupkyawpwybsaiqhsalo ; /usr/bin/python3'
Jan 23 09:46:48 compute-0 sudo[72886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:48 compute-0 python3[72888]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:46:48 compute-0 sudo[72886]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:49 compute-0 sudo[72959]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qywxbhuobbyclvrjykrcegohcajpyarh ; /usr/bin/python3'
Jan 23 09:46:49 compute-0 sudo[72959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:49 compute-0 python3[72961]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769161608.631673-37195-18738793067612/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:46:49 compute-0 sudo[72959]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:49 compute-0 sudo[73061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhwfymndvcbtokmbgeplfxzccbjoygeo ; /usr/bin/python3'
Jan 23 09:46:49 compute-0 sudo[73061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:50 compute-0 python3[73063]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:46:50 compute-0 sudo[73061]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:50 compute-0 sudo[73134]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bevjvemwwbezxxhvnaninpjhddisahwp ; /usr/bin/python3'
Jan 23 09:46:50 compute-0 sudo[73134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:50 compute-0 python3[73136]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769161609.794813-37213-11545630527729/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:46:50 compute-0 sudo[73134]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:50 compute-0 sudo[73184]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrpsfhaqrczbbewscpffxrhmpcmnastw ; /usr/bin/python3'
Jan 23 09:46:50 compute-0 sudo[73184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:50 compute-0 python3[73186]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 09:46:50 compute-0 sudo[73184]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:50 compute-0 sudo[73212]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ignwhlphdamahjarcnsgkjyccsiqkqcw ; /usr/bin/python3'
Jan 23 09:46:50 compute-0 sudo[73212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:51 compute-0 python3[73214]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 09:46:51 compute-0 sudo[73212]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:51 compute-0 sudo[73240]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueszrmxiplkrvhtcbujqznsgkemtumzs ; /usr/bin/python3'
Jan 23 09:46:51 compute-0 sudo[73240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:51 compute-0 python3[73242]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 09:46:51 compute-0 sudo[73240]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:51 compute-0 python3[73268]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 09:46:52 compute-0 sudo[73292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbxkvaskbrirqniyeuernsqieyokwrfa ; /usr/bin/python3'
Jan 23 09:46:52 compute-0 sudo[73292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:46:52 compute-0 python3[73294]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
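Annotation: the bootstrap invocation above, reflowed for readability (all flags and values taken verbatim from the logged command):

    # Bootstrap the first Ceph node on the ctlplane IP, reusing the pre-generated
    # ceph-admin SSH keypair and skipping firewalld, dashboard and monitoring stack.
    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid f3005f84-239a-55b6-a948-8f1fb592b920 \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100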
Jan 23 09:46:52 compute-0 sshd-session[73298]: Accepted publickey for ceph-admin from 192.168.122.100 port 46140 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:46:52 compute-0 systemd-logind[784]: New session 19 of user ceph-admin.
Jan 23 09:46:52 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 23 09:46:52 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 23 09:46:52 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 23 09:46:52 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 23 09:46:52 compute-0 systemd[73302]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:46:52 compute-0 systemd[73302]: Queued start job for default target Main User Target.
Jan 23 09:46:52 compute-0 systemd[73302]: Created slice User Application Slice.
Jan 23 09:46:52 compute-0 systemd[73302]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 09:46:52 compute-0 systemd[73302]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 09:46:52 compute-0 systemd[73302]: Reached target Paths.
Jan 23 09:46:52 compute-0 systemd[73302]: Reached target Timers.
Jan 23 09:46:52 compute-0 systemd[73302]: Starting D-Bus User Message Bus Socket...
Jan 23 09:46:52 compute-0 systemd[73302]: Starting Create User's Volatile Files and Directories...
Jan 23 09:46:52 compute-0 systemd[73302]: Finished Create User's Volatile Files and Directories.
Jan 23 09:46:52 compute-0 systemd[73302]: Listening on D-Bus User Message Bus Socket.
Jan 23 09:46:52 compute-0 systemd[73302]: Reached target Sockets.
Jan 23 09:46:52 compute-0 systemd[73302]: Reached target Basic System.
Jan 23 09:46:52 compute-0 systemd[73302]: Reached target Main User Target.
Jan 23 09:46:52 compute-0 systemd[73302]: Startup finished in 131ms.
Jan 23 09:46:52 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 23 09:46:52 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 23 09:46:52 compute-0 sshd-session[73298]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:46:52 compute-0 sudo[73318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 23 09:46:52 compute-0 sudo[73318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:46:52 compute-0 sudo[73318]: pam_unix(sudo:session): session closed for user root
Jan 23 09:46:52 compute-0 sshd-session[73317]: Received disconnect from 192.168.122.100 port 46140:11: disconnected by user
Jan 23 09:46:52 compute-0 sshd-session[73317]: Disconnected from user ceph-admin 192.168.122.100 port 46140
Jan 23 09:46:52 compute-0 sshd-session[73298]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:46:52 compute-0 systemd-logind[784]: Session 19 logged out. Waiting for processes to exit.
Jan 23 09:46:52 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 09:46:52 compute-0 systemd-logind[784]: Removed session 19.
Jan 23 09:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1465280559-merged.mount: Deactivated successfully.
Jan 23 09:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1465280559-lower\x2dmapped.mount: Deactivated successfully.
Jan 23 09:47:02 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 23 09:47:02 compute-0 systemd[73302]: Activating special unit Exit the Session...
Jan 23 09:47:02 compute-0 systemd[73302]: Stopped target Main User Target.
Jan 23 09:47:02 compute-0 systemd[73302]: Stopped target Basic System.
Jan 23 09:47:02 compute-0 systemd[73302]: Stopped target Paths.
Jan 23 09:47:02 compute-0 systemd[73302]: Stopped target Sockets.
Jan 23 09:47:02 compute-0 systemd[73302]: Stopped target Timers.
Jan 23 09:47:02 compute-0 systemd[73302]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 23 09:47:02 compute-0 systemd[73302]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 09:47:02 compute-0 systemd[73302]: Closed D-Bus User Message Bus Socket.
Jan 23 09:47:02 compute-0 systemd[73302]: Stopped Create User's Volatile Files and Directories.
Jan 23 09:47:02 compute-0 systemd[73302]: Removed slice User Application Slice.
Jan 23 09:47:02 compute-0 systemd[73302]: Reached target Shutdown.
Jan 23 09:47:02 compute-0 systemd[73302]: Finished Exit the Session.
Jan 23 09:47:02 compute-0 systemd[73302]: Reached target Exit the Session.
Jan 23 09:47:02 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 23 09:47:02 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 23 09:47:02 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 23 09:47:02 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 23 09:47:02 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 23 09:47:02 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 23 09:47:03 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 23 09:47:33 compute-0 podman[73395]: 2026-01-23 09:47:33.118590791 +0000 UTC m=+40.064847330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:47:33 compute-0 podman[73457]: 2026-01-23 09:47:33.199991505 +0000 UTC m=+0.053120277 container create 4fa3942d2194db80fd3da0f0ed18bf3a667aabc3319af7cd095257d65fa36f32 (image=quay.io/ceph/ceph:v19, name=agitated_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:33 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 23 09:47:33 compute-0 systemd[1]: Started libpod-conmon-4fa3942d2194db80fd3da0f0ed18bf3a667aabc3319af7cd095257d65fa36f32.scope.
Jan 23 09:47:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:33 compute-0 podman[73457]: 2026-01-23 09:47:33.173612822 +0000 UTC m=+0.026741624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:33 compute-0 podman[73457]: 2026-01-23 09:47:33.297607521 +0000 UTC m=+0.150736323 container init 4fa3942d2194db80fd3da0f0ed18bf3a667aabc3319af7cd095257d65fa36f32 (image=quay.io/ceph/ceph:v19, name=agitated_wescoff, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 23 09:47:33 compute-0 podman[73457]: 2026-01-23 09:47:33.30561197 +0000 UTC m=+0.158740752 container start 4fa3942d2194db80fd3da0f0ed18bf3a667aabc3319af7cd095257d65fa36f32 (image=quay.io/ceph/ceph:v19, name=agitated_wescoff, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 09:47:33 compute-0 podman[73457]: 2026-01-23 09:47:33.309525481 +0000 UTC m=+0.162654253 container attach 4fa3942d2194db80fd3da0f0ed18bf3a667aabc3319af7cd095257d65fa36f32 (image=quay.io/ceph/ceph:v19, name=agitated_wescoff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:47:33 compute-0 agitated_wescoff[73474]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Jan 23 09:47:33 compute-0 systemd[1]: libpod-4fa3942d2194db80fd3da0f0ed18bf3a667aabc3319af7cd095257d65fa36f32.scope: Deactivated successfully.
Jan 23 09:47:33 compute-0 podman[73457]: 2026-01-23 09:47:33.418520912 +0000 UTC m=+0.271649704 container died 4fa3942d2194db80fd3da0f0ed18bf3a667aabc3319af7cd095257d65fa36f32 (image=quay.io/ceph/ceph:v19, name=agitated_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 23 09:47:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a63631f5bb1d4c2fff3a728c3abb7e0778d5ccb5ea5d835859f825b464538f6-merged.mount: Deactivated successfully.
Jan 23 09:47:33 compute-0 podman[73457]: 2026-01-23 09:47:33.796223053 +0000 UTC m=+0.649351825 container remove 4fa3942d2194db80fd3da0f0ed18bf3a667aabc3319af7cd095257d65fa36f32 (image=quay.io/ceph/ceph:v19, name=agitated_wescoff, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 09:47:33 compute-0 podman[73491]: 2026-01-23 09:47:33.86619592 +0000 UTC m=+0.045215321 container create c58ddb2db5865d7355ff12e4e74e99c32f3c969aeb7ed38b105e10de87a5693a (image=quay.io/ceph/ceph:v19, name=silly_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 09:47:33 compute-0 systemd[1]: Started libpod-conmon-c58ddb2db5865d7355ff12e4e74e99c32f3c969aeb7ed38b105e10de87a5693a.scope.
Jan 23 09:47:33 compute-0 systemd[1]: libpod-conmon-4fa3942d2194db80fd3da0f0ed18bf3a667aabc3319af7cd095257d65fa36f32.scope: Deactivated successfully.
Jan 23 09:47:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:33 compute-0 podman[73491]: 2026-01-23 09:47:33.844621454 +0000 UTC m=+0.023640785 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:33 compute-0 podman[73491]: 2026-01-23 09:47:33.942814057 +0000 UTC m=+0.121833388 container init c58ddb2db5865d7355ff12e4e74e99c32f3c969aeb7ed38b105e10de87a5693a (image=quay.io/ceph/ceph:v19, name=silly_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 09:47:33 compute-0 podman[73491]: 2026-01-23 09:47:33.94818184 +0000 UTC m=+0.127201151 container start c58ddb2db5865d7355ff12e4e74e99c32f3c969aeb7ed38b105e10de87a5693a (image=quay.io/ceph/ceph:v19, name=silly_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:47:33 compute-0 podman[73491]: 2026-01-23 09:47:33.951734332 +0000 UTC m=+0.130753673 container attach c58ddb2db5865d7355ff12e4e74e99c32f3c969aeb7ed38b105e10de87a5693a (image=quay.io/ceph/ceph:v19, name=silly_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:47:33 compute-0 silly_mendeleev[73507]: 167 167
Jan 23 09:47:33 compute-0 systemd[1]: libpod-c58ddb2db5865d7355ff12e4e74e99c32f3c969aeb7ed38b105e10de87a5693a.scope: Deactivated successfully.
Jan 23 09:47:33 compute-0 podman[73491]: 2026-01-23 09:47:33.953164712 +0000 UTC m=+0.132184044 container died c58ddb2db5865d7355ff12e4e74e99c32f3c969aeb7ed38b105e10de87a5693a (image=quay.io/ceph/ceph:v19, name=silly_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 23 09:47:33 compute-0 podman[73491]: 2026-01-23 09:47:33.993423452 +0000 UTC m=+0.172442763 container remove c58ddb2db5865d7355ff12e4e74e99c32f3c969aeb7ed38b105e10de87a5693a (image=quay.io/ceph/ceph:v19, name=silly_mendeleev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 09:47:34 compute-0 systemd[1]: libpod-conmon-c58ddb2db5865d7355ff12e4e74e99c32f3c969aeb7ed38b105e10de87a5693a.scope: Deactivated successfully.
Jan 23 09:47:34 compute-0 podman[73525]: 2026-01-23 09:47:34.086075226 +0000 UTC m=+0.066483078 container create 803ce5a7997b2eec490f1e9d806924cfe5b5fbf50118a0d94bbc1109312648b0 (image=quay.io/ceph/ceph:v19, name=compassionate_noyce, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 09:47:34 compute-0 systemd[1]: Started libpod-conmon-803ce5a7997b2eec490f1e9d806924cfe5b5fbf50118a0d94bbc1109312648b0.scope.
Jan 23 09:47:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:34 compute-0 podman[73525]: 2026-01-23 09:47:34.044566341 +0000 UTC m=+0.024974223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:34 compute-0 podman[73525]: 2026-01-23 09:47:34.148775116 +0000 UTC m=+0.129182988 container init 803ce5a7997b2eec490f1e9d806924cfe5b5fbf50118a0d94bbc1109312648b0 (image=quay.io/ceph/ceph:v19, name=compassionate_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 09:47:34 compute-0 podman[73525]: 2026-01-23 09:47:34.155030404 +0000 UTC m=+0.135438256 container start 803ce5a7997b2eec490f1e9d806924cfe5b5fbf50118a0d94bbc1109312648b0 (image=quay.io/ceph/ceph:v19, name=compassionate_noyce, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:47:34 compute-0 podman[73525]: 2026-01-23 09:47:34.158547305 +0000 UTC m=+0.138955187 container attach 803ce5a7997b2eec490f1e9d806924cfe5b5fbf50118a0d94bbc1109312648b0 (image=quay.io/ceph/ceph:v19, name=compassionate_noyce, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 09:47:34 compute-0 compassionate_noyce[73541]: AQC2Q3Np/VuWChAAGwFnkQuURy9PAB3ZFh3nKQ==
Jan 23 09:47:34 compute-0 systemd[1]: libpod-803ce5a7997b2eec490f1e9d806924cfe5b5fbf50118a0d94bbc1109312648b0.scope: Deactivated successfully.
Jan 23 09:47:34 compute-0 podman[73525]: 2026-01-23 09:47:34.18289696 +0000 UTC m=+0.163304812 container died 803ce5a7997b2eec490f1e9d806924cfe5b5fbf50118a0d94bbc1109312648b0 (image=quay.io/ceph/ceph:v19, name=compassionate_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:47:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-489de9f830707f95362228a5e35977b38874c3205c31d168b0b44904a93b0f70-merged.mount: Deactivated successfully.
Jan 23 09:47:34 compute-0 podman[73525]: 2026-01-23 09:47:34.25752368 +0000 UTC m=+0.237931532 container remove 803ce5a7997b2eec490f1e9d806924cfe5b5fbf50118a0d94bbc1109312648b0 (image=quay.io/ceph/ceph:v19, name=compassionate_noyce, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 09:47:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:47:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:47:34 compute-0 systemd[1]: libpod-conmon-803ce5a7997b2eec490f1e9d806924cfe5b5fbf50118a0d94bbc1109312648b0.scope: Deactivated successfully.
Jan 23 09:47:34 compute-0 podman[73561]: 2026-01-23 09:47:34.315608178 +0000 UTC m=+0.039823118 container create fc7805cab20feded281c8f77ede555ae40b5141a1b9a5f649f505f813214504d (image=quay.io/ceph/ceph:v19, name=wizardly_herschel, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:47:34 compute-0 systemd[1]: Started libpod-conmon-fc7805cab20feded281c8f77ede555ae40b5141a1b9a5f649f505f813214504d.scope.
Jan 23 09:47:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:34 compute-0 podman[73561]: 2026-01-23 09:47:34.387892431 +0000 UTC m=+0.112107391 container init fc7805cab20feded281c8f77ede555ae40b5141a1b9a5f649f505f813214504d (image=quay.io/ceph/ceph:v19, name=wizardly_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:47:34 compute-0 podman[73561]: 2026-01-23 09:47:34.297928063 +0000 UTC m=+0.022143033 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:34 compute-0 podman[73561]: 2026-01-23 09:47:34.394467079 +0000 UTC m=+0.118682019 container start fc7805cab20feded281c8f77ede555ae40b5141a1b9a5f649f505f813214504d (image=quay.io/ceph/ceph:v19, name=wizardly_herschel, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 23 09:47:34 compute-0 podman[73561]: 2026-01-23 09:47:34.398073601 +0000 UTC m=+0.122288561 container attach fc7805cab20feded281c8f77ede555ae40b5141a1b9a5f649f505f813214504d (image=quay.io/ceph/ceph:v19, name=wizardly_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:47:34 compute-0 wizardly_herschel[73577]: AQC2Q3NpdIrxGBAAJ/IroxHtEMZ7ds3oLDcdzw==
Jan 23 09:47:34 compute-0 systemd[1]: libpod-fc7805cab20feded281c8f77ede555ae40b5141a1b9a5f649f505f813214504d.scope: Deactivated successfully.
Jan 23 09:47:34 compute-0 podman[73561]: 2026-01-23 09:47:34.422802307 +0000 UTC m=+0.147017257 container died fc7805cab20feded281c8f77ede555ae40b5141a1b9a5f649f505f813214504d (image=quay.io/ceph/ceph:v19, name=wizardly_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 23 09:47:34 compute-0 podman[73561]: 2026-01-23 09:47:34.640439529 +0000 UTC m=+0.364654469 container remove fc7805cab20feded281c8f77ede555ae40b5141a1b9a5f649f505f813214504d (image=quay.io/ceph/ceph:v19, name=wizardly_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:47:34 compute-0 systemd[1]: libpod-conmon-fc7805cab20feded281c8f77ede555ae40b5141a1b9a5f649f505f813214504d.scope: Deactivated successfully.
Jan 23 09:47:34 compute-0 podman[73596]: 2026-01-23 09:47:34.708915034 +0000 UTC m=+0.046434787 container create 284704060c0cd6f975725e394408a9e6594032f1ba88b9d09ede9a61192b27c2 (image=quay.io/ceph/ceph:v19, name=musing_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:47:34 compute-0 systemd[1]: Started libpod-conmon-284704060c0cd6f975725e394408a9e6594032f1ba88b9d09ede9a61192b27c2.scope.
Jan 23 09:47:34 compute-0 podman[73596]: 2026-01-23 09:47:34.686521344 +0000 UTC m=+0.024041117 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:34 compute-0 podman[73596]: 2026-01-23 09:47:34.809981098 +0000 UTC m=+0.147500881 container init 284704060c0cd6f975725e394408a9e6594032f1ba88b9d09ede9a61192b27c2 (image=quay.io/ceph/ceph:v19, name=musing_einstein, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 09:47:34 compute-0 podman[73596]: 2026-01-23 09:47:34.816381791 +0000 UTC m=+0.153901544 container start 284704060c0cd6f975725e394408a9e6594032f1ba88b9d09ede9a61192b27c2 (image=quay.io/ceph/ceph:v19, name=musing_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 09:47:34 compute-0 podman[73596]: 2026-01-23 09:47:34.819573882 +0000 UTC m=+0.157093635 container attach 284704060c0cd6f975725e394408a9e6594032f1ba88b9d09ede9a61192b27c2 (image=quay.io/ceph/ceph:v19, name=musing_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 09:47:34 compute-0 musing_einstein[73611]: AQC2Q3NpfQDPMRAAK5K8hLN4DYY2flZagDcbdw==
Jan 23 09:47:34 compute-0 systemd[1]: libpod-284704060c0cd6f975725e394408a9e6594032f1ba88b9d09ede9a61192b27c2.scope: Deactivated successfully.
Jan 23 09:47:34 compute-0 podman[73596]: 2026-01-23 09:47:34.839267274 +0000 UTC m=+0.176787027 container died 284704060c0cd6f975725e394408a9e6594032f1ba88b9d09ede9a61192b27c2 (image=quay.io/ceph/ceph:v19, name=musing_einstein, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 09:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-797669d6a243c1d7423c5da1bed6d8a000c4ec63712fb18fb1884f14f8611596-merged.mount: Deactivated successfully.
Jan 23 09:47:35 compute-0 podman[73596]: 2026-01-23 09:47:35.175563423 +0000 UTC m=+0.513083176 container remove 284704060c0cd6f975725e394408a9e6594032f1ba88b9d09ede9a61192b27c2 (image=quay.io/ceph/ceph:v19, name=musing_einstein, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 09:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:47:35 compute-0 systemd[1]: libpod-conmon-284704060c0cd6f975725e394408a9e6594032f1ba88b9d09ede9a61192b27c2.scope: Deactivated successfully.
Jan 23 09:47:35 compute-0 podman[73631]: 2026-01-23 09:47:35.308630411 +0000 UTC m=+0.112547563 container create 48000a729ff523b990f5a69dc8df34b7c0e05266ce062bb1cb208f3b6f79fcea (image=quay.io/ceph/ceph:v19, name=zealous_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:47:35 compute-0 systemd[1]: Started libpod-conmon-48000a729ff523b990f5a69dc8df34b7c0e05266ce062bb1cb208f3b6f79fcea.scope.
Jan 23 09:47:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be7643a41b5def139111f3f6e3c56e8d39433937b9b11d1d008b55d2a1b8ffca/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:35 compute-0 podman[73631]: 2026-01-23 09:47:35.290943516 +0000 UTC m=+0.094860688 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:35 compute-0 podman[73631]: 2026-01-23 09:47:35.466785485 +0000 UTC m=+0.270702657 container init 48000a729ff523b990f5a69dc8df34b7c0e05266ce062bb1cb208f3b6f79fcea (image=quay.io/ceph/ceph:v19, name=zealous_spence, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:47:35 compute-0 podman[73631]: 2026-01-23 09:47:35.471712026 +0000 UTC m=+0.275629178 container start 48000a729ff523b990f5a69dc8df34b7c0e05266ce062bb1cb208f3b6f79fcea (image=quay.io/ceph/ceph:v19, name=zealous_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 23 09:47:35 compute-0 podman[73631]: 2026-01-23 09:47:35.481702211 +0000 UTC m=+0.285619393 container attach 48000a729ff523b990f5a69dc8df34b7c0e05266ce062bb1cb208f3b6f79fcea (image=quay.io/ceph/ceph:v19, name=zealous_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:35 compute-0 zealous_spence[73648]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 23 09:47:35 compute-0 zealous_spence[73648]: setting min_mon_release = quincy
Jan 23 09:47:35 compute-0 zealous_spence[73648]: /usr/bin/monmaptool: set fsid to f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:35 compute-0 zealous_spence[73648]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 23 09:47:35 compute-0 systemd[1]: libpod-48000a729ff523b990f5a69dc8df34b7c0e05266ce062bb1cb208f3b6f79fcea.scope: Deactivated successfully.
Jan 23 09:47:35 compute-0 podman[73631]: 2026-01-23 09:47:35.50374471 +0000 UTC m=+0.307661862 container died 48000a729ff523b990f5a69dc8df34b7c0e05266ce062bb1cb208f3b6f79fcea (image=quay.io/ceph/ceph:v19, name=zealous_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:47:35 compute-0 podman[73631]: 2026-01-23 09:47:35.562369024 +0000 UTC m=+0.366286176 container remove 48000a729ff523b990f5a69dc8df34b7c0e05266ce062bb1cb208f3b6f79fcea (image=quay.io/ceph/ceph:v19, name=zealous_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Jan 23 09:47:35 compute-0 systemd[1]: libpod-conmon-48000a729ff523b990f5a69dc8df34b7c0e05266ce062bb1cb208f3b6f79fcea.scope: Deactivated successfully.
Jan 23 09:47:35 compute-0 podman[73667]: 2026-01-23 09:47:35.625560247 +0000 UTC m=+0.042676589 container create c2ff1d49b0280cedd6a20e41c8711a3321eec01f1869c623a21536ea0fd6744f (image=quay.io/ceph/ceph:v19, name=xenodochial_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 09:47:35 compute-0 systemd[1]: Started libpod-conmon-c2ff1d49b0280cedd6a20e41c8711a3321eec01f1869c623a21536ea0fd6744f.scope.
Jan 23 09:47:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3459e099f5a1f07338f4dd3277e77f518a49a44ef9649f2c92845a431ad3fa4/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3459e099f5a1f07338f4dd3277e77f518a49a44ef9649f2c92845a431ad3fa4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3459e099f5a1f07338f4dd3277e77f518a49a44ef9649f2c92845a431ad3fa4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3459e099f5a1f07338f4dd3277e77f518a49a44ef9649f2c92845a431ad3fa4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:35 compute-0 podman[73667]: 2026-01-23 09:47:35.603364454 +0000 UTC m=+0.020480816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:35 compute-0 podman[73667]: 2026-01-23 09:47:35.704915672 +0000 UTC m=+0.122032034 container init c2ff1d49b0280cedd6a20e41c8711a3321eec01f1869c623a21536ea0fd6744f (image=quay.io/ceph/ceph:v19, name=xenodochial_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:35 compute-0 podman[73667]: 2026-01-23 09:47:35.710202723 +0000 UTC m=+0.127319055 container start c2ff1d49b0280cedd6a20e41c8711a3321eec01f1869c623a21536ea0fd6744f (image=quay.io/ceph/ceph:v19, name=xenodochial_chaplygin, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 09:47:35 compute-0 podman[73667]: 2026-01-23 09:47:35.722530355 +0000 UTC m=+0.139646717 container attach c2ff1d49b0280cedd6a20e41c8711a3321eec01f1869c623a21536ea0fd6744f (image=quay.io/ceph/ceph:v19, name=xenodochial_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 09:47:35 compute-0 systemd[1]: libpod-c2ff1d49b0280cedd6a20e41c8711a3321eec01f1869c623a21536ea0fd6744f.scope: Deactivated successfully.
Jan 23 09:47:35 compute-0 podman[73667]: 2026-01-23 09:47:35.885577239 +0000 UTC m=+0.302693591 container died c2ff1d49b0280cedd6a20e41c8711a3321eec01f1869c623a21536ea0fd6744f (image=quay.io/ceph/ceph:v19, name=xenodochial_chaplygin, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 09:47:35 compute-0 podman[73667]: 2026-01-23 09:47:35.938372976 +0000 UTC m=+0.355489318 container remove c2ff1d49b0280cedd6a20e41c8711a3321eec01f1869c623a21536ea0fd6744f (image=quay.io/ceph/ceph:v19, name=xenodochial_chaplygin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 09:47:35 compute-0 systemd[1]: libpod-conmon-c2ff1d49b0280cedd6a20e41c8711a3321eec01f1869c623a21536ea0fd6744f.scope: Deactivated successfully.
Jan 23 09:47:36 compute-0 systemd[1]: Reloading.
Jan 23 09:47:36 compute-0 systemd-rc-local-generator[73749]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:47:36 compute-0 systemd-sysv-generator[73754]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:47:36 compute-0 systemd[1]: Reloading.
Jan 23 09:47:36 compute-0 systemd-rc-local-generator[73789]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:47:36 compute-0 systemd-sysv-generator[73792]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:47:36 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 23 09:47:36 compute-0 systemd[1]: Reloading.
Jan 23 09:47:36 compute-0 systemd-sysv-generator[73829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:47:36 compute-0 systemd-rc-local-generator[73826]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:47:36 compute-0 systemd[1]: Reached target Ceph cluster f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:47:36 compute-0 systemd[1]: Reloading.
Jan 23 09:47:36 compute-0 systemd-rc-local-generator[73863]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:47:36 compute-0 systemd-sysv-generator[73867]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:47:37 compute-0 systemd[1]: Reloading.
Jan 23 09:47:37 compute-0 systemd-rc-local-generator[73903]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:47:37 compute-0 systemd-sysv-generator[73907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:47:37 compute-0 systemd[1]: Created slice Slice /system/ceph-f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:47:37 compute-0 systemd[1]: Reached target System Time Set.
Jan 23 09:47:37 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 23 09:47:37 compute-0 systemd[1]: Starting Ceph mon.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:47:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:47:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:47:37 compute-0 podman[73961]: 2026-01-23 09:47:37.654791728 +0000 UTC m=+0.051897870 container create 21e4f1d69f673838392ca8ce580f7f79e14823c7c8a8422ad5c6d4e9aaef08c6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 09:47:37 compute-0 podman[73961]: 2026-01-23 09:47:37.625954184 +0000 UTC m=+0.023060326 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3b947d518cd8007e49dec742f61e94a15342d02a7f579960ba38a202c662ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3b947d518cd8007e49dec742f61e94a15342d02a7f579960ba38a202c662ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3b947d518cd8007e49dec742f61e94a15342d02a7f579960ba38a202c662ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3b947d518cd8007e49dec742f61e94a15342d02a7f579960ba38a202c662ad/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:38 compute-0 podman[73961]: 2026-01-23 09:47:38.207040472 +0000 UTC m=+0.604146664 container init 21e4f1d69f673838392ca8ce580f7f79e14823c7c8a8422ad5c6d4e9aaef08c6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 09:47:38 compute-0 podman[73961]: 2026-01-23 09:47:38.212649716 +0000 UTC m=+0.609755868 container start 21e4f1d69f673838392ca8ce580f7f79e14823c7c8a8422ad5c6d4e9aaef08c6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:47:38 compute-0 bash[73961]: 21e4f1d69f673838392ca8ce580f7f79e14823c7c8a8422ad5c6d4e9aaef08c6
Jan 23 09:47:38 compute-0 systemd[1]: Started Ceph mon.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:47:38 compute-0 ceph-mon[73981]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Jan 23 09:47:38 compute-0 ceph-mon[73981]: pidfile_write: ignore empty --pid-file
Jan 23 09:47:38 compute-0 ceph-mon[73981]: load: jerasure load: lrc 
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: RocksDB version: 7.9.2
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Git sha 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: DB SUMMARY
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: DB Session ID:  N31MRFZSCEZVWZ5FDYJI
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: CURRENT file:  CURRENT
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                         Options.error_if_exists: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                       Options.create_if_missing: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                                     Options.env: 0x55c605f7bc20
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                                Options.info_log: 0x55c606da6d60
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                              Options.statistics: (nil)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                               Options.use_fsync: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                              Options.db_log_dir: 
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                                 Options.wal_dir: 
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                    Options.write_buffer_manager: 0x55c606dab900
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                  Options.unordered_write: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                               Options.row_cache: None
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                              Options.wal_filter: None
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.two_write_queues: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.wal_compression: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.atomic_flush: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.max_background_jobs: 2
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.max_background_compactions: -1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.max_subcompactions: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.max_total_wal_size: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                          Options.max_open_files: -1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:       Options.compaction_readahead_size: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Compression algorithms supported:
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         kZSTD supported: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         kXpressCompression supported: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         kBZip2Compression supported: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         kLZ4Compression supported: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         kZlibCompression supported: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         kSnappyCompression supported: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:           Options.merge_operator: 
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:        Options.compaction_filter: None
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c606da6500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c606dcb350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:        Options.write_buffer_size: 33554432
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:  Options.max_write_buffer_number: 2
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:          Options.compression: NoCompression
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.num_levels: 7
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dfd65f37-5d13-4bd7-9c84-01e95a04d6c8
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161658255609, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161658329197, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "N31MRFZSCEZVWZ5FDYJI", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161658329423, "job": 1, "event": "recovery_finished"}
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c606dcce00
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: DB pointer 0x55c606ed6000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 09:47:38 compute-0 ceph-mon[73981]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.074       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c606dcb350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 09:47:38 compute-0 ceph-mon[73981]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@-1(???) e0 preinit fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 23 09:47:38 compute-0 ceph-mon[73981]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 23 09:47:38 compute-0 podman[74003]: 2026-01-23 09:47:38.554835661 +0000 UTC m=+0.058863114 container create 9e704c375296b8b1148382393028862961830af77710cda41f416bf91a73a2d3 (image=quay.io/ceph/ceph:v19, name=quizzical_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 23 09:47:38 compute-0 ceph-mon[73981]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [DBG] : fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [DBG] : last_changed 2026-01-23T09:47:35.499222+0000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [DBG] : created 2026-01-23T09:47:35.499222+0000
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).mds e1 new map
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-01-23T09:47:38.565964+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [DBG] : fsmap 
Jan 23 09:47:38 compute-0 systemd[1]: Started libpod-conmon-9e704c375296b8b1148382393028862961830af77710cda41f416bf91a73a2d3.scope.
Jan 23 09:47:38 compute-0 podman[74003]: 2026-01-23 09:47:38.521127004 +0000 UTC m=+0.025154517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:38 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a659dae052c6a10a8c705a1c1f2f1f482261b61d42fd1da946bb5675b94389cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a659dae052c6a10a8c705a1c1f2f1f482261b61d42fd1da946bb5675b94389cb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a659dae052c6a10a8c705a1c1f2f1f482261b61d42fd1da946bb5675b94389cb/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 23 09:47:38 compute-0 podman[74003]: 2026-01-23 09:47:38.686884906 +0000 UTC m=+0.190912379 container init 9e704c375296b8b1148382393028862961830af77710cda41f416bf91a73a2d3 (image=quay.io/ceph/ceph:v19, name=quizzical_chandrasekhar, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mkfs f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:38 compute-0 podman[74003]: 2026-01-23 09:47:38.693697735 +0000 UTC m=+0.197725188 container start 9e704c375296b8b1148382393028862961830af77710cda41f416bf91a73a2d3 (image=quay.io/ceph/ceph:v19, name=quizzical_chandrasekhar, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 23 09:47:38 compute-0 podman[74003]: 2026-01-23 09:47:38.696979521 +0000 UTC m=+0.201006974 container attach 9e704c375296b8b1148382393028862961830af77710cda41f416bf91a73a2d3 (image=quay.io/ceph/ceph:v19, name=quizzical_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 23 09:47:38 compute-0 ceph-mon[73981]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2883030653' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:   cluster:
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:     id:     f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:     health: HEALTH_OK
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:  
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:   services:
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:     mon: 1 daemons, quorum compute-0 (age 0.336292s)
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:     mgr: no daemons active
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:     osd: 0 osds: 0 up, 0 in
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:  
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:   data:
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:     pools:   0 pools, 0 pgs
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:     objects: 0 objects, 0 B
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:     usage:   0 B used, 0 B / 0 B avail
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:     pgs:     
Jan 23 09:47:38 compute-0 quizzical_chandrasekhar[74036]:  
Jan 23 09:47:38 compute-0 systemd[1]: libpod-9e704c375296b8b1148382393028862961830af77710cda41f416bf91a73a2d3.scope: Deactivated successfully.
Jan 23 09:47:38 compute-0 podman[74062]: 2026-01-23 09:47:38.954516019 +0000 UTC m=+0.022309264 container died 9e704c375296b8b1148382393028862961830af77710cda41f416bf91a73a2d3 (image=quay.io/ceph/ceph:v19, name=quizzical_chandrasekhar, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 09:47:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a659dae052c6a10a8c705a1c1f2f1f482261b61d42fd1da946bb5675b94389cb-merged.mount: Deactivated successfully.
Jan 23 09:47:39 compute-0 podman[74062]: 2026-01-23 09:47:39.692035115 +0000 UTC m=+0.759828340 container remove 9e704c375296b8b1148382393028862961830af77710cda41f416bf91a73a2d3 (image=quay.io/ceph/ceph:v19, name=quizzical_chandrasekhar, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 09:47:39 compute-0 systemd[1]: libpod-conmon-9e704c375296b8b1148382393028862961830af77710cda41f416bf91a73a2d3.scope: Deactivated successfully.
Jan 23 09:47:39 compute-0 podman[74077]: 2026-01-23 09:47:39.815885669 +0000 UTC m=+0.095254708 container create 0ef8d6e7cf3dad07ffc452382175b1a1f72a9dbff263c20fd2beca65ec9662a8 (image=quay.io/ceph/ceph:v19, name=magical_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:39 compute-0 ceph-mon[73981]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 09:47:39 compute-0 ceph-mon[73981]: monmap epoch 1
Jan 23 09:47:39 compute-0 ceph-mon[73981]: fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:39 compute-0 ceph-mon[73981]: last_changed 2026-01-23T09:47:35.499222+0000
Jan 23 09:47:39 compute-0 ceph-mon[73981]: created 2026-01-23T09:47:35.499222+0000
Jan 23 09:47:39 compute-0 ceph-mon[73981]: min_mon_release 19 (squid)
Jan 23 09:47:39 compute-0 ceph-mon[73981]: election_strategy: 1
Jan 23 09:47:39 compute-0 ceph-mon[73981]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 09:47:39 compute-0 ceph-mon[73981]: fsmap 
Jan 23 09:47:39 compute-0 ceph-mon[73981]: osdmap e1: 0 total, 0 up, 0 in
Jan 23 09:47:39 compute-0 ceph-mon[73981]: mgrmap e1: no daemons active
Jan 23 09:47:39 compute-0 ceph-mon[73981]: from='client.? 192.168.122.100:0/2883030653' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 09:47:39 compute-0 podman[74077]: 2026-01-23 09:47:39.744732267 +0000 UTC m=+0.024101326 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:39 compute-0 systemd[1]: Started libpod-conmon-0ef8d6e7cf3dad07ffc452382175b1a1f72a9dbff263c20fd2beca65ec9662a8.scope.
Jan 23 09:47:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24fc5416818650db440d6d919d76011dfb1a206c3897deff1f2882bb75188196/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24fc5416818650db440d6d919d76011dfb1a206c3897deff1f2882bb75188196/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24fc5416818650db440d6d919d76011dfb1a206c3897deff1f2882bb75188196/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24fc5416818650db440d6d919d76011dfb1a206c3897deff1f2882bb75188196/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:39 compute-0 podman[74077]: 2026-01-23 09:47:39.914116114 +0000 UTC m=+0.193485203 container init 0ef8d6e7cf3dad07ffc452382175b1a1f72a9dbff263c20fd2beca65ec9662a8 (image=quay.io/ceph/ceph:v19, name=magical_raman, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 09:47:39 compute-0 podman[74077]: 2026-01-23 09:47:39.919251565 +0000 UTC m=+0.198620604 container start 0ef8d6e7cf3dad07ffc452382175b1a1f72a9dbff263c20fd2beca65ec9662a8 (image=quay.io/ceph/ceph:v19, name=magical_raman, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:47:39 compute-0 podman[74077]: 2026-01-23 09:47:39.93242433 +0000 UTC m=+0.211793389 container attach 0ef8d6e7cf3dad07ffc452382175b1a1f72a9dbff263c20fd2beca65ec9662a8 (image=quay.io/ceph/ceph:v19, name=magical_raman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:47:40 compute-0 ceph-mon[73981]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 23 09:47:40 compute-0 ceph-mon[73981]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2879807099' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 09:47:40 compute-0 ceph-mon[73981]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2879807099' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 09:47:40 compute-0 magical_raman[74093]: 
Jan 23 09:47:40 compute-0 magical_raman[74093]: [global]
Jan 23 09:47:40 compute-0 magical_raman[74093]:         fsid = f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:40 compute-0 magical_raman[74093]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 23 09:47:40 compute-0 systemd[1]: libpod-0ef8d6e7cf3dad07ffc452382175b1a1f72a9dbff263c20fd2beca65ec9662a8.scope: Deactivated successfully.
Jan 23 09:47:40 compute-0 podman[74077]: 2026-01-23 09:47:40.134213766 +0000 UTC m=+0.413582835 container died 0ef8d6e7cf3dad07ffc452382175b1a1f72a9dbff263c20fd2beca65ec9662a8 (image=quay.io/ceph/ceph:v19, name=magical_raman, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:47:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-24fc5416818650db440d6d919d76011dfb1a206c3897deff1f2882bb75188196-merged.mount: Deactivated successfully.
Jan 23 09:47:40 compute-0 podman[74077]: 2026-01-23 09:47:40.280793397 +0000 UTC m=+0.560162436 container remove 0ef8d6e7cf3dad07ffc452382175b1a1f72a9dbff263c20fd2beca65ec9662a8 (image=quay.io/ceph/ceph:v19, name=magical_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:40 compute-0 systemd[1]: libpod-conmon-0ef8d6e7cf3dad07ffc452382175b1a1f72a9dbff263c20fd2beca65ec9662a8.scope: Deactivated successfully.
Jan 23 09:47:40 compute-0 podman[74129]: 2026-01-23 09:47:40.323592329 +0000 UTC m=+0.022511920 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:40 compute-0 podman[74129]: 2026-01-23 09:47:40.425816871 +0000 UTC m=+0.124736442 container create f04b1f34cef8fe8a667465c381709ea554cb042868004d3cc2130016b5a923c5 (image=quay.io/ceph/ceph:v19, name=bold_fermi, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:47:40 compute-0 systemd[1]: Started libpod-conmon-f04b1f34cef8fe8a667465c381709ea554cb042868004d3cc2130016b5a923c5.scope.
Jan 23 09:47:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f360db484d12ba153b86f0944898a61d665390f5a73dd6b9390eb990e32d5c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f360db484d12ba153b86f0944898a61d665390f5a73dd6b9390eb990e32d5c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f360db484d12ba153b86f0944898a61d665390f5a73dd6b9390eb990e32d5c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f360db484d12ba153b86f0944898a61d665390f5a73dd6b9390eb990e32d5c4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:40 compute-0 podman[74129]: 2026-01-23 09:47:40.510033936 +0000 UTC m=+0.208953527 container init f04b1f34cef8fe8a667465c381709ea554cb042868004d3cc2130016b5a923c5 (image=quay.io/ceph/ceph:v19, name=bold_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 23 09:47:40 compute-0 podman[74129]: 2026-01-23 09:47:40.515080824 +0000 UTC m=+0.214000395 container start f04b1f34cef8fe8a667465c381709ea554cb042868004d3cc2130016b5a923c5 (image=quay.io/ceph/ceph:v19, name=bold_fermi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:40 compute-0 podman[74129]: 2026-01-23 09:47:40.521904443 +0000 UTC m=+0.220824044 container attach f04b1f34cef8fe8a667465c381709ea554cb042868004d3cc2130016b5a923c5 (image=quay.io/ceph/ceph:v19, name=bold_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:47:40 compute-0 ceph-mon[73981]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:47:40 compute-0 ceph-mon[73981]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2591874054' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:47:40 compute-0 systemd[1]: libpod-f04b1f34cef8fe8a667465c381709ea554cb042868004d3cc2130016b5a923c5.scope: Deactivated successfully.
Jan 23 09:47:40 compute-0 podman[74172]: 2026-01-23 09:47:40.767948625 +0000 UTC m=+0.022234582 container died f04b1f34cef8fe8a667465c381709ea554cb042868004d3cc2130016b5a923c5 (image=quay.io/ceph/ceph:v19, name=bold_fermi, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 09:47:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f360db484d12ba153b86f0944898a61d665390f5a73dd6b9390eb990e32d5c4-merged.mount: Deactivated successfully.
Jan 23 09:47:40 compute-0 ceph-mon[73981]: from='client.? 192.168.122.100:0/2879807099' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 09:47:40 compute-0 ceph-mon[73981]: from='client.? 192.168.122.100:0/2879807099' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 09:47:40 compute-0 ceph-mon[73981]: from='client.? 192.168.122.100:0/2591874054' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:47:40 compute-0 podman[74172]: 2026-01-23 09:47:40.835871203 +0000 UTC m=+0.090157120 container remove f04b1f34cef8fe8a667465c381709ea554cb042868004d3cc2130016b5a923c5 (image=quay.io/ceph/ceph:v19, name=bold_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:47:40 compute-0 systemd[1]: libpod-conmon-f04b1f34cef8fe8a667465c381709ea554cb042868004d3cc2130016b5a923c5.scope: Deactivated successfully.
Jan 23 09:47:40 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:47:41 compute-0 ceph-mon[73981]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 23 09:47:41 compute-0 ceph-mon[73981]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 23 09:47:41 compute-0 ceph-mon[73981]: mon.compute-0@0(leader) e1 shutdown
Jan 23 09:47:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0[73977]: 2026-01-23T09:47:41.147+0000 7f25be43c640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 23 09:47:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0[73977]: 2026-01-23T09:47:41.147+0000 7f25be43c640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 23 09:47:41 compute-0 ceph-mon[73981]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 23 09:47:41 compute-0 ceph-mon[73981]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 23 09:47:41 compute-0 podman[74215]: 2026-01-23 09:47:41.414075656 +0000 UTC m=+0.418185161 container died 21e4f1d69f673838392ca8ce580f7f79e14823c7c8a8422ad5c6d4e9aaef08c6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 23 09:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a3b947d518cd8007e49dec742f61e94a15342d02a7f579960ba38a202c662ad-merged.mount: Deactivated successfully.
Jan 23 09:47:41 compute-0 podman[74215]: 2026-01-23 09:47:41.447993999 +0000 UTC m=+0.452103514 container remove 21e4f1d69f673838392ca8ce580f7f79e14823c7c8a8422ad5c6d4e9aaef08c6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:47:41 compute-0 bash[74215]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0
Jan 23 09:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 09:47:41 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@mon.compute-0.service: Deactivated successfully.
Jan 23 09:47:41 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:47:41 compute-0 systemd[1]: Starting Ceph mon.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:47:41 compute-0 podman[74315]: 2026-01-23 09:47:41.832814762 +0000 UTC m=+0.108606979 container create cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 09:47:41 compute-0 podman[74315]: 2026-01-23 09:47:41.74629436 +0000 UTC m=+0.022086597 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5ce19beb3067af916dd9233a0d589cc81624dc4a3b24baf129b085879854bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5ce19beb3067af916dd9233a0d589cc81624dc4a3b24baf129b085879854bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5ce19beb3067af916dd9233a0d589cc81624dc4a3b24baf129b085879854bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5ce19beb3067af916dd9233a0d589cc81624dc4a3b24baf129b085879854bc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:41 compute-0 podman[74315]: 2026-01-23 09:47:41.952293679 +0000 UTC m=+0.228085916 container init cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 09:47:41 compute-0 podman[74315]: 2026-01-23 09:47:41.958106379 +0000 UTC m=+0.233898596 container start cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:47:41 compute-0 bash[74315]: cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6
Jan 23 09:47:41 compute-0 systemd[1]: Started Ceph mon.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:47:41 compute-0 ceph-mon[74335]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 09:47:41 compute-0 ceph-mon[74335]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Jan 23 09:47:41 compute-0 ceph-mon[74335]: pidfile_write: ignore empty --pid-file
Jan 23 09:47:41 compute-0 ceph-mon[74335]: load: jerasure load: lrc 
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: RocksDB version: 7.9.2
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Git sha 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: DB SUMMARY
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: DB Session ID:  H0542XX9TGHHLXC3GFH0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: CURRENT file:  CURRENT
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60153 ; 
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                         Options.error_if_exists: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                       Options.create_if_missing: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                                     Options.env: 0x5569dbe2cc20
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                                Options.info_log: 0x5569ddb53ac0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                              Options.statistics: (nil)
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                               Options.use_fsync: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                              Options.db_log_dir: 
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                                 Options.wal_dir: 
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                    Options.write_buffer_manager: 0x5569ddb57900
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                  Options.unordered_write: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                               Options.row_cache: None
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                              Options.wal_filter: None
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.two_write_queues: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.wal_compression: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.atomic_flush: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.max_background_jobs: 2
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.max_background_compactions: -1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.max_subcompactions: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.max_total_wal_size: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                          Options.max_open_files: -1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:       Options.compaction_readahead_size: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Compression algorithms supported:
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         kZSTD supported: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         kXpressCompression supported: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         kBZip2Compression supported: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         kLZ4Compression supported: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         kZlibCompression supported: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         kSnappyCompression supported: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:           Options.merge_operator: 
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:        Options.compaction_filter: None
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5569ddb52aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5569ddb77350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:        Options.write_buffer_size: 33554432
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:  Options.max_write_buffer_number: 2
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:          Options.compression: NoCompression
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.num_levels: 7
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dfd65f37-5d13-4bd7-9c84-01e95a04d6c8
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161662002865, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161662007000, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59776, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 144, "table_properties": {"data_size": 58231, "index_size": 187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3238, "raw_average_key_size": 30, "raw_value_size": 55699, "raw_average_value_size": 520, "num_data_blocks": 9, "num_entries": 107, "num_filter_entries": 107, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161662, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161662007114, "job": 1, "event": "recovery_finished"}
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5569ddb78e00
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: DB pointer 0x5569ddc82000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 09:47:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.27 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   60.27 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 4.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 4.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5569ddb77350#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.38 KB,7.15256e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 09:47:42 compute-0 ceph-mon[74335]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@-1(???) e1 preinit fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@-1(???).mds e1 new map
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-01-23T09:47:38.565964+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 23 09:47:42 compute-0 ceph-mon[74335]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 09:47:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 09:47:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : last_changed 2026-01-23T09:47:35.499222+0000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : created 2026-01-23T09:47:35.499222+0000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 23 09:47:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 09:47:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap 
Jan 23 09:47:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 23 09:47:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 23 09:47:42 compute-0 podman[74336]: 2026-01-23 09:47:42.035203276 +0000 UTC m=+0.039346313 container create e2001edfb0158bd771bfa082d930ca004d3ec727a6c90503f551b568f228c850 (image=quay.io/ceph/ceph:v19, name=priceless_blackwell, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:47:42 compute-0 systemd[1]: Started libpod-conmon-e2001edfb0158bd771bfa082d930ca004d3ec727a6c90503f551b568f228c850.scope.
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 09:47:42 compute-0 ceph-mon[74335]: monmap epoch 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:42 compute-0 ceph-mon[74335]: last_changed 2026-01-23T09:47:35.499222+0000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: created 2026-01-23T09:47:35.499222+0000
Jan 23 09:47:42 compute-0 ceph-mon[74335]: min_mon_release 19 (squid)
Jan 23 09:47:42 compute-0 ceph-mon[74335]: election_strategy: 1
Jan 23 09:47:42 compute-0 ceph-mon[74335]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 09:47:42 compute-0 ceph-mon[74335]: fsmap 
Jan 23 09:47:42 compute-0 ceph-mon[74335]: osdmap e1: 0 total, 0 up, 0 in
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mgrmap e1: no daemons active
Jan 23 09:47:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf83b860a9e1311a6702547b43dc696817dbb7f7c2bc8ad563aeced385e4278/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf83b860a9e1311a6702547b43dc696817dbb7f7c2bc8ad563aeced385e4278/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf83b860a9e1311a6702547b43dc696817dbb7f7c2bc8ad563aeced385e4278/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:42 compute-0 podman[74336]: 2026-01-23 09:47:42.110005415 +0000 UTC m=+0.114148482 container init e2001edfb0158bd771bfa082d930ca004d3ec727a6c90503f551b568f228c850 (image=quay.io/ceph/ceph:v19, name=priceless_blackwell, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:42 compute-0 podman[74336]: 2026-01-23 09:47:42.019181007 +0000 UTC m=+0.023324064 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:42 compute-0 podman[74336]: 2026-01-23 09:47:42.116864236 +0000 UTC m=+0.121007273 container start e2001edfb0158bd771bfa082d930ca004d3ec727a6c90503f551b568f228c850 (image=quay.io/ceph/ceph:v19, name=priceless_blackwell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 09:47:42 compute-0 podman[74336]: 2026-01-23 09:47:42.120182773 +0000 UTC m=+0.124325810 container attach e2001edfb0158bd771bfa082d930ca004d3ec727a6c90503f551b568f228c850 (image=quay.io/ceph/ceph:v19, name=priceless_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 23 09:47:42 compute-0 systemd[1]: libpod-e2001edfb0158bd771bfa082d930ca004d3ec727a6c90503f551b568f228c850.scope: Deactivated successfully.
Jan 23 09:47:42 compute-0 conmon[74390]: conmon e2001edfb0158bd771bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2001edfb0158bd771bfa082d930ca004d3ec727a6c90503f551b568f228c850.scope/container/memory.events
Jan 23 09:47:42 compute-0 podman[74416]: 2026-01-23 09:47:42.383624124 +0000 UTC m=+0.027416464 container died e2001edfb0158bd771bfa082d930ca004d3ec727a6c90503f551b568f228c850 (image=quay.io/ceph/ceph:v19, name=priceless_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:47:42 compute-0 podman[74416]: 2026-01-23 09:47:42.441618861 +0000 UTC m=+0.085411181 container remove e2001edfb0158bd771bfa082d930ca004d3ec727a6c90503f551b568f228c850 (image=quay.io/ceph/ceph:v19, name=priceless_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:42 compute-0 systemd[1]: libpod-conmon-e2001edfb0158bd771bfa082d930ca004d3ec727a6c90503f551b568f228c850.scope: Deactivated successfully.
Jan 23 09:47:42 compute-0 podman[74431]: 2026-01-23 09:47:42.514049401 +0000 UTC m=+0.044353569 container create a0967078f42118b5f4d18864c1e7a1dd630148c53eb11ff9026f9e7e9e892233 (image=quay.io/ceph/ceph:v19, name=mystifying_black, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:47:42 compute-0 systemd[1]: Started libpod-conmon-a0967078f42118b5f4d18864c1e7a1dd630148c53eb11ff9026f9e7e9e892233.scope.
Jan 23 09:47:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c79720a8c39773610104519676608125d732761366ff4f6babbdef54bd024a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c79720a8c39773610104519676608125d732761366ff4f6babbdef54bd024a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c79720a8c39773610104519676608125d732761366ff4f6babbdef54bd024a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:42 compute-0 podman[74431]: 2026-01-23 09:47:42.493404487 +0000 UTC m=+0.023708685 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:42 compute-0 podman[74431]: 2026-01-23 09:47:42.597156304 +0000 UTC m=+0.127460502 container init a0967078f42118b5f4d18864c1e7a1dd630148c53eb11ff9026f9e7e9e892233 (image=quay.io/ceph/ceph:v19, name=mystifying_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 23 09:47:42 compute-0 podman[74431]: 2026-01-23 09:47:42.605383474 +0000 UTC m=+0.135687632 container start a0967078f42118b5f4d18864c1e7a1dd630148c53eb11ff9026f9e7e9e892233 (image=quay.io/ceph/ceph:v19, name=mystifying_black, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 23 09:47:42 compute-0 podman[74431]: 2026-01-23 09:47:42.608942918 +0000 UTC m=+0.139247086 container attach a0967078f42118b5f4d18864c1e7a1dd630148c53eb11ff9026f9e7e9e892233 (image=quay.io/ceph/ceph:v19, name=mystifying_black, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 09:47:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 23 09:47:42 compute-0 systemd[1]: libpod-a0967078f42118b5f4d18864c1e7a1dd630148c53eb11ff9026f9e7e9e892233.scope: Deactivated successfully.
Jan 23 09:47:42 compute-0 podman[74431]: 2026-01-23 09:47:42.814157185 +0000 UTC m=+0.344461353 container died a0967078f42118b5f4d18864c1e7a1dd630148c53eb11ff9026f9e7e9e892233 (image=quay.io/ceph/ceph:v19, name=mystifying_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 09:47:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-44c79720a8c39773610104519676608125d732761366ff4f6babbdef54bd024a-merged.mount: Deactivated successfully.
Jan 23 09:47:43 compute-0 podman[74431]: 2026-01-23 09:47:43.862676963 +0000 UTC m=+1.392981131 container remove a0967078f42118b5f4d18864c1e7a1dd630148c53eb11ff9026f9e7e9e892233 (image=quay.io/ceph/ceph:v19, name=mystifying_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:47:43 compute-0 systemd[1]: libpod-conmon-a0967078f42118b5f4d18864c1e7a1dd630148c53eb11ff9026f9e7e9e892233.scope: Deactivated successfully.
Jan 23 09:47:44 compute-0 systemd[1]: Reloading.
Jan 23 09:47:44 compute-0 systemd-rc-local-generator[74515]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:47:44 compute-0 systemd-sysv-generator[74518]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:47:45 compute-0 systemd[1]: Reloading.
Jan 23 09:47:45 compute-0 systemd-rc-local-generator[74555]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:47:45 compute-0 systemd-sysv-generator[74558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:47:45 compute-0 systemd[1]: Starting Ceph mgr.compute-0.nbdygh for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:47:45 compute-0 podman[74613]: 2026-01-23 09:47:45.619180763 +0000 UTC m=+0.036702135 container create e4a1c45f747e69af65041011d00875cfaaf16149f31875bd1585747dd24058b3 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2500fc458994a630ad6b62e3e8adcc094e3bd5300632239ebceb55ebf49962e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2500fc458994a630ad6b62e3e8adcc094e3bd5300632239ebceb55ebf49962e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2500fc458994a630ad6b62e3e8adcc094e3bd5300632239ebceb55ebf49962e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2500fc458994a630ad6b62e3e8adcc094e3bd5300632239ebceb55ebf49962e3/merged/var/lib/ceph/mgr/ceph-compute-0.nbdygh supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:45 compute-0 podman[74613]: 2026-01-23 09:47:45.678901761 +0000 UTC m=+0.096423173 container init e4a1c45f747e69af65041011d00875cfaaf16149f31875bd1585747dd24058b3 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 09:47:45 compute-0 podman[74613]: 2026-01-23 09:47:45.683518496 +0000 UTC m=+0.101039858 container start e4a1c45f747e69af65041011d00875cfaaf16149f31875bd1585747dd24058b3 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 09:47:45 compute-0 bash[74613]: e4a1c45f747e69af65041011d00875cfaaf16149f31875bd1585747dd24058b3
Jan 23 09:47:45 compute-0 podman[74613]: 2026-01-23 09:47:45.601531786 +0000 UTC m=+0.019053178 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:45 compute-0 systemd[1]: Started Ceph mgr.compute-0.nbdygh for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:47:45 compute-0 ceph-mgr[74633]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 09:47:45 compute-0 ceph-mgr[74633]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 23 09:47:45 compute-0 ceph-mgr[74633]: pidfile_write: ignore empty --pid-file
Jan 23 09:47:45 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'alerts'
Jan 23 09:47:45 compute-0 podman[74634]: 2026-01-23 09:47:45.762399615 +0000 UTC m=+0.046151482 container create 18596c1105b6221c3ad00a0abf0f25ac0134c959ad0d291f3fbb2c5e49f69551 (image=quay.io/ceph/ceph:v19, name=nervous_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 23 09:47:45 compute-0 systemd[1]: Started libpod-conmon-18596c1105b6221c3ad00a0abf0f25ac0134c959ad0d291f3fbb2c5e49f69551.scope.
Jan 23 09:47:45 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5722966e9f9c1ca89ea66a042405c3b290b50d219ad701d42720abdcd91c39cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5722966e9f9c1ca89ea66a042405c3b290b50d219ad701d42720abdcd91c39cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5722966e9f9c1ca89ea66a042405c3b290b50d219ad701d42720abdcd91c39cd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:45 compute-0 podman[74634]: 2026-01-23 09:47:45.743512452 +0000 UTC m=+0.027264339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:45 compute-0 podman[74634]: 2026-01-23 09:47:45.847441404 +0000 UTC m=+0.131193291 container init 18596c1105b6221c3ad00a0abf0f25ac0134c959ad0d291f3fbb2c5e49f69551 (image=quay.io/ceph/ceph:v19, name=nervous_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 09:47:45 compute-0 podman[74634]: 2026-01-23 09:47:45.855261323 +0000 UTC m=+0.139013190 container start 18596c1105b6221c3ad00a0abf0f25ac0134c959ad0d291f3fbb2c5e49f69551 (image=quay.io/ceph/ceph:v19, name=nervous_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:45 compute-0 podman[74634]: 2026-01-23 09:47:45.858393114 +0000 UTC m=+0.142144981 container attach 18596c1105b6221c3ad00a0abf0f25ac0134c959ad0d291f3fbb2c5e49f69551 (image=quay.io/ceph/ceph:v19, name=nervous_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:47:45 compute-0 ceph-mgr[74633]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:47:45 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'balancer'
Jan 23 09:47:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:45.863+0000 7fa005d64140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:47:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:45.957+0000 7fa005d64140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:47:45 compute-0 ceph-mgr[74633]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:47:45 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'cephadm'
Jan 23 09:47:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 23 09:47:46 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1915636740' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 09:47:46 compute-0 nervous_wilson[74670]: 
Jan 23 09:47:46 compute-0 nervous_wilson[74670]: {
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "health": {
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "status": "HEALTH_OK",
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "checks": {},
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "mutes": []
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     },
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "election_epoch": 5,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "quorum": [
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         0
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     ],
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "quorum_names": [
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "compute-0"
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     ],
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "quorum_age": 4,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "monmap": {
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "epoch": 1,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "min_mon_release_name": "squid",
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "num_mons": 1
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     },
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "osdmap": {
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "epoch": 1,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "num_osds": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "num_up_osds": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "osd_up_since": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "num_in_osds": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "osd_in_since": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "num_remapped_pgs": 0
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     },
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "pgmap": {
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "pgs_by_state": [],
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "num_pgs": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "num_pools": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "num_objects": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "data_bytes": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "bytes_used": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "bytes_avail": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "bytes_total": 0
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     },
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "fsmap": {
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "epoch": 1,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "btime": "2026-01-23T09:47:38.565964+0000",
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "by_rank": [],
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "up:standby": 0
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     },
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "mgrmap": {
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "available": false,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "num_standbys": 0,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "modules": [
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:             "iostat",
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:             "nfs",
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:             "restful"
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         ],
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "services": {}
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     },
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "servicemap": {
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "epoch": 1,
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "modified": "2026-01-23T09:47:38.571725+0000",
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:         "services": {}
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     },
Jan 23 09:47:46 compute-0 nervous_wilson[74670]:     "progress_events": {}
Jan 23 09:47:46 compute-0 nervous_wilson[74670]: }
Jan 23 09:47:46 compute-0 systemd[1]: libpod-18596c1105b6221c3ad00a0abf0f25ac0134c959ad0d291f3fbb2c5e49f69551.scope: Deactivated successfully.
Jan 23 09:47:46 compute-0 podman[74634]: 2026-01-23 09:47:46.082882725 +0000 UTC m=+0.366634642 container died 18596c1105b6221c3ad00a0abf0f25ac0134c959ad0d291f3fbb2c5e49f69551 (image=quay.io/ceph/ceph:v19, name=nervous_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 23 09:47:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5722966e9f9c1ca89ea66a042405c3b290b50d219ad701d42720abdcd91c39cd-merged.mount: Deactivated successfully.
Jan 23 09:47:46 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1915636740' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 09:47:46 compute-0 podman[74634]: 2026-01-23 09:47:46.120577158 +0000 UTC m=+0.404329025 container remove 18596c1105b6221c3ad00a0abf0f25ac0134c959ad0d291f3fbb2c5e49f69551 (image=quay.io/ceph/ceph:v19, name=nervous_wilson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:47:46 compute-0 systemd[1]: libpod-conmon-18596c1105b6221c3ad00a0abf0f25ac0134c959ad0d291f3fbb2c5e49f69551.scope: Deactivated successfully.
Jan 23 09:47:46 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'crash'
Jan 23 09:47:46 compute-0 ceph-mgr[74633]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:47:46 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'dashboard'
Jan 23 09:47:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:46.908+0000 7fa005d64140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:47:47 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'devicehealth'
Jan 23 09:47:47 compute-0 ceph-mgr[74633]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:47:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:47.599+0000 7fa005d64140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:47:47 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'diskprediction_local'
Jan 23 09:47:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 23 09:47:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 23 09:47:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   from numpy import show_config as show_numpy_config
Jan 23 09:47:47 compute-0 ceph-mgr[74633]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:47:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:47.787+0000 7fa005d64140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:47:47 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'influx'
Jan 23 09:47:47 compute-0 ceph-mgr[74633]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:47:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:47.865+0000 7fa005d64140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:47:47 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'insights'
Jan 23 09:47:47 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'iostat'
Jan 23 09:47:48 compute-0 ceph-mgr[74633]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:47:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:48.010+0000 7fa005d64140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:47:48 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'k8sevents'
Jan 23 09:47:48 compute-0 podman[74720]: 2026-01-23 09:47:48.171380042 +0000 UTC m=+0.023371136 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:49 compute-0 podman[74720]: 2026-01-23 09:47:49.027579311 +0000 UTC m=+0.879570395 container create 0388978c2e6a0c18959672165fde91b16e9fa5b8114424e06223256368da71b1 (image=quay.io/ceph/ceph:v19, name=sweet_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 09:47:49 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'localpool'
Jan 23 09:47:49 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mds_autoscaler'
Jan 23 09:47:49 compute-0 systemd[1]: Started libpod-conmon-0388978c2e6a0c18959672165fde91b16e9fa5b8114424e06223256368da71b1.scope.
Jan 23 09:47:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85600b114cec037834ec42528908ea1b4417857132753c6099e79f51166e8b20/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85600b114cec037834ec42528908ea1b4417857132753c6099e79f51166e8b20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85600b114cec037834ec42528908ea1b4417857132753c6099e79f51166e8b20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:49 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mirroring'
Jan 23 09:47:49 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'nfs'
Jan 23 09:47:49 compute-0 podman[74720]: 2026-01-23 09:47:49.66265145 +0000 UTC m=+1.514642564 container init 0388978c2e6a0c18959672165fde91b16e9fa5b8114424e06223256368da71b1 (image=quay.io/ceph/ceph:v19, name=sweet_booth, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:47:49 compute-0 podman[74720]: 2026-01-23 09:47:49.668370157 +0000 UTC m=+1.520361241 container start 0388978c2e6a0c18959672165fde91b16e9fa5b8114424e06223256368da71b1 (image=quay.io/ceph/ceph:v19, name=sweet_booth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 09:47:49 compute-0 podman[74720]: 2026-01-23 09:47:49.690862225 +0000 UTC m=+1.542853339 container attach 0388978c2e6a0c18959672165fde91b16e9fa5b8114424e06223256368da71b1 (image=quay.io/ceph/ceph:v19, name=sweet_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:49 compute-0 ceph-mgr[74633]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:47:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:49.773+0000 7fa005d64140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:47:49 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'orchestrator'
Jan 23 09:47:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 23 09:47:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3259691013' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 09:47:49 compute-0 sweet_booth[74736]: 
Jan 23 09:47:49 compute-0 sweet_booth[74736]: {
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "health": {
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "status": "HEALTH_OK",
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "checks": {},
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "mutes": []
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     },
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "election_epoch": 5,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "quorum": [
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         0
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     ],
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "quorum_names": [
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "compute-0"
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     ],
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "quorum_age": 7,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "monmap": {
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "epoch": 1,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "min_mon_release_name": "squid",
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "num_mons": 1
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     },
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "osdmap": {
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "epoch": 1,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "num_osds": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "num_up_osds": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "osd_up_since": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "num_in_osds": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "osd_in_since": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "num_remapped_pgs": 0
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     },
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "pgmap": {
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "pgs_by_state": [],
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "num_pgs": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "num_pools": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "num_objects": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "data_bytes": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "bytes_used": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "bytes_avail": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "bytes_total": 0
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     },
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "fsmap": {
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "epoch": 1,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "btime": "2026-01-23T09:47:38.565964+0000",
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "by_rank": [],
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "up:standby": 0
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     },
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "mgrmap": {
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "available": false,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "num_standbys": 0,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "modules": [
Jan 23 09:47:49 compute-0 sweet_booth[74736]:             "iostat",
Jan 23 09:47:49 compute-0 sweet_booth[74736]:             "nfs",
Jan 23 09:47:49 compute-0 sweet_booth[74736]:             "restful"
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         ],
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "services": {}
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     },
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "servicemap": {
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "epoch": 1,
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "modified": "2026-01-23T09:47:38.571725+0000",
Jan 23 09:47:49 compute-0 sweet_booth[74736]:         "services": {}
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     },
Jan 23 09:47:49 compute-0 sweet_booth[74736]:     "progress_events": {}
Jan 23 09:47:49 compute-0 sweet_booth[74736]: }
Jan 23 09:47:49 compute-0 systemd[1]: libpod-0388978c2e6a0c18959672165fde91b16e9fa5b8114424e06223256368da71b1.scope: Deactivated successfully.
Jan 23 09:47:49 compute-0 podman[74720]: 2026-01-23 09:47:49.905451566 +0000 UTC m=+1.757442650 container died 0388978c2e6a0c18959672165fde91b16e9fa5b8114424e06223256368da71b1 (image=quay.io/ceph/ceph:v19, name=sweet_booth, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-85600b114cec037834ec42528908ea1b4417857132753c6099e79f51166e8b20-merged.mount: Deactivated successfully.
Jan 23 09:47:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3259691013' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 09:47:49 compute-0 podman[74720]: 2026-01-23 09:47:49.943957133 +0000 UTC m=+1.795948217 container remove 0388978c2e6a0c18959672165fde91b16e9fa5b8114424e06223256368da71b1 (image=quay.io/ceph/ceph:v19, name=sweet_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 23 09:47:49 compute-0 systemd[1]: libpod-conmon-0388978c2e6a0c18959672165fde91b16e9fa5b8114424e06223256368da71b1.scope: Deactivated successfully.
Jan 23 09:47:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:50.049+0000 7fa005d64140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_perf_query'
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:50.140+0000 7fa005d64140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_support'
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:50.216+0000 7fa005d64140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'pg_autoscaler'
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:50.303+0000 7fa005d64140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'progress'
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:50.384+0000 7fa005d64140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'prometheus'
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:50.780+0000 7fa005d64140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rbd_support'
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:47:50 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'restful'
Jan 23 09:47:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:50.893+0000 7fa005d64140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:47:51 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rgw'
Jan 23 09:47:51 compute-0 ceph-mgr[74633]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:47:51 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rook'
Jan 23 09:47:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:51.365+0000 7fa005d64140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'selftest'
Jan 23 09:47:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:52.005+0000 7fa005d64140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 podman[74774]: 2026-01-23 09:47:51.989533833 +0000 UTC m=+0.021226572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'snap_schedule'
Jan 23 09:47:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:52.090+0000 7fa005d64140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'stats'
Jan 23 09:47:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:52.182+0000 7fa005d64140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'status'
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:52.339+0000 7fa005d64140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telegraf'
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:52.411+0000 7fa005d64140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telemetry'
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:52.577+0000 7fa005d64140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'test_orchestrator'
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:47:52 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'volumes'
Jan 23 09:47:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:52.822+0000 7fa005d64140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:47:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:53.125+0000 7fa005d64140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'zabbix'
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:47:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:47:53.202+0000 7fa005d64140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: ms_deliver_dispatch: unhandled message 0x56315c66e9c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.nbdygh
Jan 23 09:47:53 compute-0 podman[74774]: 2026-01-23 09:47:53.346631254 +0000 UTC m=+1.378323973 container create e68d08f460dca311ab40e327e73112aed9172e8c4164ea7d8112f55ef02762ba (image=quay.io/ceph/ceph:v19, name=boring_liskov, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr handle_mgr_map Activating!
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.nbdygh(active, starting, since 0.142817s)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr handle_mgr_map I am now activating
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e1 all = 1
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"} v 0)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"}]: dispatch
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Manager daemon compute-0.nbdygh is now available
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: balancer
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [balancer INFO root] Starting
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: crash
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:47:53
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [balancer INFO root] No pools available
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: devicehealth
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Starting
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: iostat
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: nfs
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: orchestrator
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: pg_autoscaler
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: progress
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [progress INFO root] Loading...
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [progress INFO root] No stored events to load
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [progress INFO root] Loaded [] historic events
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [progress INFO root] Loaded OSDMap, ready.
Jan 23 09:47:53 compute-0 systemd[1]: Started libpod-conmon-e68d08f460dca311ab40e327e73112aed9172e8c4164ea7d8112f55ef02762ba.scope.
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [rbd_support INFO root] recovery thread starting
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [rbd_support INFO root] starting setup
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: rbd_support
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: restful
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: status
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [restful INFO root] server_addr: :: server_port: 8003
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [restful WARNING root] server not running: no certificate configured
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: telemetry
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"} v 0)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"}]: dispatch
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' 
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [rbd_support INFO root] PerfHandler: starting
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TaskHandler: starting
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"} v 0)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"}]: dispatch
Jan 23 09:47:53 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:53 compute-0 ceph-mon[74335]: Activating manager daemon compute-0.nbdygh
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mgrmap e2: compute-0.nbdygh(active, starting, since 0.142817s)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 09:47:53 compute-0 ceph-mon[74335]: from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 09:47:53 compute-0 ceph-mon[74335]: from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 09:47:53 compute-0 ceph-mon[74335]: from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:47:53 compute-0 ceph-mon[74335]: from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"}]: dispatch
Jan 23 09:47:53 compute-0 ceph-mon[74335]: Manager daemon compute-0.nbdygh is now available
Jan 23 09:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/496b1bd429b22440a083a9b04cfd7ce1464184bdc583ff67de2f1df8de6eeda4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/496b1bd429b22440a083a9b04cfd7ce1464184bdc583ff67de2f1df8de6eeda4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/496b1bd429b22440a083a9b04cfd7ce1464184bdc583ff67de2f1df8de6eeda4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' 
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: [rbd_support INFO root] setup complete
Jan 23 09:47:53 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: volumes
Jan 23 09:47:53 compute-0 podman[74774]: 2026-01-23 09:47:53.422224316 +0000 UTC m=+1.453917055 container init e68d08f460dca311ab40e327e73112aed9172e8c4164ea7d8112f55ef02762ba (image=quay.io/ceph/ceph:v19, name=boring_liskov, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' 
Jan 23 09:47:53 compute-0 podman[74774]: 2026-01-23 09:47:53.4281552 +0000 UTC m=+1.459847919 container start e68d08f460dca311ab40e327e73112aed9172e8c4164ea7d8112f55ef02762ba (image=quay.io/ceph/ceph:v19, name=boring_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 09:47:53 compute-0 podman[74774]: 2026-01-23 09:47:53.439415609 +0000 UTC m=+1.471108318 container attach e68d08f460dca311ab40e327e73112aed9172e8c4164ea7d8112f55ef02762ba (image=quay.io/ceph/ceph:v19, name=boring_liskov, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 23 09:47:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1680988595' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 09:47:53 compute-0 boring_liskov[74825]: 
Jan 23 09:47:53 compute-0 boring_liskov[74825]: {
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "health": {
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "status": "HEALTH_OK",
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "checks": {},
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "mutes": []
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     },
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "election_epoch": 5,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "quorum": [
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         0
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     ],
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "quorum_names": [
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "compute-0"
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     ],
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "quorum_age": 11,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "monmap": {
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "epoch": 1,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "min_mon_release_name": "squid",
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "num_mons": 1
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     },
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "osdmap": {
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "epoch": 1,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "num_osds": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "num_up_osds": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "osd_up_since": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "num_in_osds": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "osd_in_since": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "num_remapped_pgs": 0
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     },
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "pgmap": {
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "pgs_by_state": [],
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "num_pgs": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "num_pools": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "num_objects": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "data_bytes": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "bytes_used": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "bytes_avail": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "bytes_total": 0
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     },
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "fsmap": {
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "epoch": 1,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "btime": "2026-01-23T09:47:38:565964+0000",
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "by_rank": [],
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "up:standby": 0
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     },
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "mgrmap": {
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "available": false,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "num_standbys": 0,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "modules": [
Jan 23 09:47:53 compute-0 boring_liskov[74825]:             "iostat",
Jan 23 09:47:53 compute-0 boring_liskov[74825]:             "nfs",
Jan 23 09:47:53 compute-0 boring_liskov[74825]:             "restful"
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         ],
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "services": {}
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     },
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "servicemap": {
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "epoch": 1,
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "modified": "2026-01-23T09:47:38.571725+0000",
Jan 23 09:47:53 compute-0 boring_liskov[74825]:         "services": {}
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     },
Jan 23 09:47:53 compute-0 boring_liskov[74825]:     "progress_events": {}
Jan 23 09:47:53 compute-0 boring_liskov[74825]: }
Jan 23 09:47:53 compute-0 systemd[1]: libpod-e68d08f460dca311ab40e327e73112aed9172e8c4164ea7d8112f55ef02762ba.scope: Deactivated successfully.
Jan 23 09:47:53 compute-0 podman[74896]: 2026-01-23 09:47:53.671321947 +0000 UTC m=+0.026144586 container died e68d08f460dca311ab40e327e73112aed9172e8c4164ea7d8112f55ef02762ba (image=quay.io/ceph/ceph:v19, name=boring_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:47:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-496b1bd429b22440a083a9b04cfd7ce1464184bdc583ff67de2f1df8de6eeda4-merged.mount: Deactivated successfully.
Jan 23 09:47:54 compute-0 podman[74896]: 2026-01-23 09:47:54.154420596 +0000 UTC m=+0.509243205 container remove e68d08f460dca311ab40e327e73112aed9172e8c4164ea7d8112f55ef02762ba (image=quay.io/ceph/ceph:v19, name=boring_liskov, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 09:47:54 compute-0 systemd[1]: libpod-conmon-e68d08f460dca311ab40e327e73112aed9172e8c4164ea7d8112f55ef02762ba.scope: Deactivated successfully.
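Note: the boring_liskov container above is one of the short-lived quay.io/ceph/ceph:v19 containers that cephadm launches during bootstrap just to run a single ceph CLI command (here "ceph status --format json-pretty") and then exit; its mgrmap still shows "available": false because the manager had only just been activated a second earlier. The following is a minimal Python sketch of the same readiness check, not what cephadm literally executes; it assumes a working ceph CLI and admin keyring on the host, and the polling loop is illustrative.

    import json
    import subprocess
    import time

    def cluster_status():
        # Run the same query the bootstrap container above ran and parse it.
        # Assumes a local `ceph` CLI with the admin keyring (illustrative).
        out = subprocess.check_output(
            ["ceph", "status", "--format", "json-pretty"], text=True
        )
        return json.loads(out)

    def wait_for_active_mgr(timeout=60):
        # Poll until the mgrmap reports an available active manager.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if cluster_status()["mgrmap"].get("available"):
                return True
            time.sleep(2)
        return False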
Jan 23 09:47:54 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.nbdygh(active, since 1.1622s)
Jan 23 09:47:54 compute-0 ceph-mon[74335]: from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"}]: dispatch
Jan 23 09:47:54 compute-0 ceph-mon[74335]: from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' 
Jan 23 09:47:54 compute-0 ceph-mon[74335]: from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"}]: dispatch
Jan 23 09:47:54 compute-0 ceph-mon[74335]: from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' 
Jan 23 09:47:54 compute-0 ceph-mon[74335]: from='mgr.14102 192.168.122.100:0/2924830344' entity='mgr.compute-0.nbdygh' 
Jan 23 09:47:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1680988595' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 09:47:54 compute-0 ceph-mon[74335]: mgrmap e3: compute-0.nbdygh(active, since 1.1622s)
Jan 23 09:47:55 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:47:55 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.nbdygh(active, since 2s)
Jan 23 09:47:56 compute-0 podman[74911]: 2026-01-23 09:47:56.201744068 +0000 UTC m=+0.020487721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:57 compute-0 ceph-mon[74335]: mgrmap e4: compute-0.nbdygh(active, since 2s)
Jan 23 09:47:57 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:47:57 compute-0 podman[74911]: 2026-01-23 09:47:57.495792432 +0000 UTC m=+1.314536065 container create ec629c70a838aec0e10ad91e941d10f411d40c76be06e1ce2555404c5cd32c4e (image=quay.io/ceph/ceph:v19, name=loving_lamport, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 23 09:47:57 compute-0 systemd[1]: Started libpod-conmon-ec629c70a838aec0e10ad91e941d10f411d40c76be06e1ce2555404c5cd32c4e.scope.
Jan 23 09:47:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e30cee6147c0e74a40c67ae50a0a30df4c78d351b361c452abb59a7ffb6fea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e30cee6147c0e74a40c67ae50a0a30df4c78d351b361c452abb59a7ffb6fea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e30cee6147c0e74a40c67ae50a0a30df4c78d351b361c452abb59a7ffb6fea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:57 compute-0 podman[74911]: 2026-01-23 09:47:57.567834841 +0000 UTC m=+1.386578484 container init ec629c70a838aec0e10ad91e941d10f411d40c76be06e1ce2555404c5cd32c4e (image=quay.io/ceph/ceph:v19, name=loving_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 09:47:57 compute-0 podman[74911]: 2026-01-23 09:47:57.573222429 +0000 UTC m=+1.391966062 container start ec629c70a838aec0e10ad91e941d10f411d40c76be06e1ce2555404c5cd32c4e (image=quay.io/ceph/ceph:v19, name=loving_lamport, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Jan 23 09:47:57 compute-0 podman[74911]: 2026-01-23 09:47:57.576575157 +0000 UTC m=+1.395318820 container attach ec629c70a838aec0e10ad91e941d10f411d40c76be06e1ce2555404c5cd32c4e (image=quay.io/ceph/ceph:v19, name=loving_lamport, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 09:47:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 23 09:47:58 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1551796460' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 09:47:58 compute-0 loving_lamport[74928]: 
Jan 23 09:47:58 compute-0 loving_lamport[74928]: {
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "health": {
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "status": "HEALTH_OK",
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "checks": {},
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "mutes": []
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     },
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "election_epoch": 5,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "quorum": [
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         0
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     ],
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "quorum_names": [
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "compute-0"
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     ],
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "quorum_age": 15,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "monmap": {
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "epoch": 1,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "min_mon_release_name": "squid",
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "num_mons": 1
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     },
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "osdmap": {
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "epoch": 1,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "num_osds": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "num_up_osds": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "osd_up_since": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "num_in_osds": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "osd_in_since": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "num_remapped_pgs": 0
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     },
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "pgmap": {
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "pgs_by_state": [],
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "num_pgs": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "num_pools": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "num_objects": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "data_bytes": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "bytes_used": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "bytes_avail": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "bytes_total": 0
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     },
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "fsmap": {
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "epoch": 1,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "btime": "2026-01-23T09:47:38:565964+0000",
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "by_rank": [],
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "up:standby": 0
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     },
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "mgrmap": {
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "available": true,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "num_standbys": 0,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "modules": [
Jan 23 09:47:58 compute-0 loving_lamport[74928]:             "iostat",
Jan 23 09:47:58 compute-0 loving_lamport[74928]:             "nfs",
Jan 23 09:47:58 compute-0 loving_lamport[74928]:             "restful"
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         ],
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "services": {}
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     },
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "servicemap": {
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "epoch": 1,
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "modified": "2026-01-23T09:47:38.571725+0000",
Jan 23 09:47:58 compute-0 loving_lamport[74928]:         "services": {}
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     },
Jan 23 09:47:58 compute-0 loving_lamport[74928]:     "progress_events": {}
Jan 23 09:47:58 compute-0 loving_lamport[74928]: }
Jan 23 09:47:58 compute-0 systemd[1]: libpod-ec629c70a838aec0e10ad91e941d10f411d40c76be06e1ce2555404c5cd32c4e.scope: Deactivated successfully.
Jan 23 09:47:58 compute-0 podman[74911]: 2026-01-23 09:47:58.034810538 +0000 UTC m=+1.853554171 container died ec629c70a838aec0e10ad91e941d10f411d40c76be06e1ce2555404c5cd32c4e (image=quay.io/ceph/ceph:v19, name=loving_lamport, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-36e30cee6147c0e74a40c67ae50a0a30df4c78d351b361c452abb59a7ffb6fea-merged.mount: Deactivated successfully.
Jan 23 09:47:58 compute-0 podman[74911]: 2026-01-23 09:47:58.073892382 +0000 UTC m=+1.892636015 container remove ec629c70a838aec0e10ad91e941d10f411d40c76be06e1ce2555404c5cd32c4e (image=quay.io/ceph/ceph:v19, name=loving_lamport, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:47:58 compute-0 systemd[1]: libpod-conmon-ec629c70a838aec0e10ad91e941d10f411d40c76be06e1ce2555404c5cd32c4e.scope: Deactivated successfully.
Jan 23 09:47:58 compute-0 podman[74965]: 2026-01-23 09:47:58.210343315 +0000 UTC m=+0.036032005 container create 7f3a705913278cdb6ff73d9d4012159e09505d7f5770fd8b75e9a60c7c7cc1c7 (image=quay.io/ceph/ceph:v19, name=boring_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:47:58 compute-0 systemd[1]: Started libpod-conmon-7f3a705913278cdb6ff73d9d4012159e09505d7f5770fd8b75e9a60c7c7cc1c7.scope.
Jan 23 09:47:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1a910e37a1fd4826325229acede37d719c537adc663fcf064eabddcd3ff7e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1a910e37a1fd4826325229acede37d719c537adc663fcf064eabddcd3ff7e1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1a910e37a1fd4826325229acede37d719c537adc663fcf064eabddcd3ff7e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1a910e37a1fd4826325229acede37d719c537adc663fcf064eabddcd3ff7e1/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:58 compute-0 podman[74965]: 2026-01-23 09:47:58.272395812 +0000 UTC m=+0.098084522 container init 7f3a705913278cdb6ff73d9d4012159e09505d7f5770fd8b75e9a60c7c7cc1c7 (image=quay.io/ceph/ceph:v19, name=boring_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 23 09:47:58 compute-0 podman[74965]: 2026-01-23 09:47:58.277227943 +0000 UTC m=+0.102916633 container start 7f3a705913278cdb6ff73d9d4012159e09505d7f5770fd8b75e9a60c7c7cc1c7 (image=quay.io/ceph/ceph:v19, name=boring_hopper, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:47:58 compute-0 podman[74965]: 2026-01-23 09:47:58.280962982 +0000 UTC m=+0.106651672 container attach 7f3a705913278cdb6ff73d9d4012159e09505d7f5770fd8b75e9a60c7c7cc1c7 (image=quay.io/ceph/ceph:v19, name=boring_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 09:47:58 compute-0 podman[74965]: 2026-01-23 09:47:58.195917733 +0000 UTC m=+0.021606443 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1551796460' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 09:47:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 23 09:47:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/104436834' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 09:47:58 compute-0 boring_hopper[74981]: 
Jan 23 09:47:58 compute-0 boring_hopper[74981]: [global]
Jan 23 09:47:58 compute-0 boring_hopper[74981]:         fsid = f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:47:58 compute-0 boring_hopper[74981]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 23 09:47:58 compute-0 systemd[1]: libpod-7f3a705913278cdb6ff73d9d4012159e09505d7f5770fd8b75e9a60c7c7cc1c7.scope: Deactivated successfully.
Jan 23 09:47:58 compute-0 podman[74965]: 2026-01-23 09:47:58.977507629 +0000 UTC m=+0.803196319 container died 7f3a705913278cdb6ff73d9d4012159e09505d7f5770fd8b75e9a60c7c7cc1c7 (image=quay.io/ceph/ceph:v19, name=boring_hopper, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec1a910e37a1fd4826325229acede37d719c537adc663fcf064eabddcd3ff7e1-merged.mount: Deactivated successfully.
Jan 23 09:47:59 compute-0 podman[74965]: 2026-01-23 09:47:59.01342848 +0000 UTC m=+0.839117170 container remove 7f3a705913278cdb6ff73d9d4012159e09505d7f5770fd8b75e9a60c7c7cc1c7 (image=quay.io/ceph/ceph:v19, name=boring_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:47:59 compute-0 systemd[1]: libpod-conmon-7f3a705913278cdb6ff73d9d4012159e09505d7f5770fd8b75e9a60c7c7cc1c7.scope: Deactivated successfully.
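Note: boring_hopper is the bootstrap step that feeds the user-supplied configuration (the user.conf bind-mounted into the container above) to "ceph config assimilate-conf". The monitor stores whatever options it can in its central config database, and the command prints back the [global] section that still has to live in a local file (fsid and mon_host, which a client needs before it can reach the cluster at all). A hedged sketch of the equivalent call; the input path is taken from the mount line above and is an assumption for any other deployment.

    import subprocess

    def assimilate_conf(src="/var/lib/ceph/user.conf"):
        # Hand an existing conf file to the monitors; options that can live in
        # the central config database are absorbed, and the minimal [global]
        # section (fsid, mon_host) is printed back for keeping on disk.
        return subprocess.check_output(
            ["ceph", "config", "assimilate-conf", "-i", src], text=True
        )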
Jan 23 09:47:59 compute-0 podman[75017]: 2026-01-23 09:47:59.077506716 +0000 UTC m=+0.042025401 container create b42dd6799894aef962250c86f3774cb82dbc83f9937a0c2d17126587c12d357d (image=quay.io/ceph/ceph:v19, name=affectionate_pike, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 09:47:59 compute-0 systemd[1]: Started libpod-conmon-b42dd6799894aef962250c86f3774cb82dbc83f9937a0c2d17126587c12d357d.scope.
Jan 23 09:47:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47debc181b7449b639f6cbe221758ceaaa6b165d936da36e9f76d1e7e01947b7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47debc181b7449b639f6cbe221758ceaaa6b165d936da36e9f76d1e7e01947b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47debc181b7449b639f6cbe221758ceaaa6b165d936da36e9f76d1e7e01947b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:47:59 compute-0 podman[75017]: 2026-01-23 09:47:59.142662893 +0000 UTC m=+0.107181598 container init b42dd6799894aef962250c86f3774cb82dbc83f9937a0c2d17126587c12d357d (image=quay.io/ceph/ceph:v19, name=affectionate_pike, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 09:47:59 compute-0 podman[75017]: 2026-01-23 09:47:59.148000849 +0000 UTC m=+0.112519534 container start b42dd6799894aef962250c86f3774cb82dbc83f9937a0c2d17126587c12d357d (image=quay.io/ceph/ceph:v19, name=affectionate_pike, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 23 09:47:59 compute-0 podman[75017]: 2026-01-23 09:47:59.151594994 +0000 UTC m=+0.116113679 container attach b42dd6799894aef962250c86f3774cb82dbc83f9937a0c2d17126587c12d357d (image=quay.io/ceph/ceph:v19, name=affectionate_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 09:47:59 compute-0 podman[75017]: 2026-01-23 09:47:59.060431506 +0000 UTC m=+0.024950211 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:47:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/104436834' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 09:47:59 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:47:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 23 09:47:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/432416217' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 23 09:48:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/432416217' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:48:01 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/432416217' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  1: '-n'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  2: 'mgr.compute-0.nbdygh'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  3: '-f'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  4: '--setuser'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  5: 'ceph'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  6: '--setgroup'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  7: 'ceph'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  8: '--default-log-to-file=false'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  9: '--default-log-to-journald=true'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr respawn  exe_path /proc/self/exe
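Note: once "ceph mgr module enable cephadm" is dispatched, the active mgr sees that its set of enabled modules changed and respawns itself instead of hot-loading the new module: it re-executes /usr/bin/ceph-mgr through /proc/self/exe with the original argument vector listed above. A minimal sketch of that re-exec pattern, not the actual ceph-mgr code:

    import os
    import sys

    def respawn():
        # Re-exec the current process with its original argv, the same pattern
        # the mgr log above shows (argv 0..10 replayed via /proc/self/exe).
        os.execv("/proc/self/exe", sys.argv)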
Jan 23 09:48:01 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.nbdygh(active, since 8s)
Jan 23 09:48:01 compute-0 systemd[1]: libpod-b42dd6799894aef962250c86f3774cb82dbc83f9937a0c2d17126587c12d357d.scope: Deactivated successfully.
Jan 23 09:48:01 compute-0 podman[75017]: 2026-01-23 09:48:01.482833635 +0000 UTC m=+2.447352320 container died b42dd6799894aef962250c86f3774cb82dbc83f9937a0c2d17126587c12d357d (image=quay.io/ceph/ceph:v19, name=affectionate_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 09:48:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ignoring --setuser ceph since I am not root
Jan 23 09:48:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ignoring --setgroup ceph since I am not root
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: pidfile_write: ignore empty --pid-file
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'alerts'
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'balancer'
Jan 23 09:48:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:01.687+0000 7fa5cb84f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:48:01 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'cephadm'
Jan 23 09:48:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:01.782+0000 7fa5cb84f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-47debc181b7449b639f6cbe221758ceaaa6b165d936da36e9f76d1e7e01947b7-merged.mount: Deactivated successfully.
Jan 23 09:48:01 compute-0 podman[75017]: 2026-01-23 09:48:01.971302542 +0000 UTC m=+2.935821277 container remove b42dd6799894aef962250c86f3774cb82dbc83f9937a0c2d17126587c12d357d (image=quay.io/ceph/ceph:v19, name=affectionate_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 09:48:02 compute-0 podman[75090]: 2026-01-23 09:48:02.045066791 +0000 UTC m=+0.049684905 container create ec85182edd80931a939e915896d386fd6fce923be1958dab0786e09f591dc180 (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:02 compute-0 systemd[1]: Started libpod-conmon-ec85182edd80931a939e915896d386fd6fce923be1958dab0786e09f591dc180.scope.
Jan 23 09:48:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b6712bb752fdddae0f094a053f5b5d617c4fd76b05f050b148b9ec05b0ed94/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b6712bb752fdddae0f094a053f5b5d617c4fd76b05f050b148b9ec05b0ed94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b6712bb752fdddae0f094a053f5b5d617c4fd76b05f050b148b9ec05b0ed94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:02 compute-0 podman[75090]: 2026-01-23 09:48:02.017252197 +0000 UTC m=+0.021870331 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:02 compute-0 podman[75090]: 2026-01-23 09:48:02.253613425 +0000 UTC m=+0.258231559 container init ec85182edd80931a939e915896d386fd6fce923be1958dab0786e09f591dc180 (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:48:02 compute-0 podman[75090]: 2026-01-23 09:48:02.258633312 +0000 UTC m=+0.263251426 container start ec85182edd80931a939e915896d386fd6fce923be1958dab0786e09f591dc180 (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:02 compute-0 podman[75090]: 2026-01-23 09:48:02.366268352 +0000 UTC m=+0.370886466 container attach ec85182edd80931a939e915896d386fd6fce923be1958dab0786e09f591dc180 (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:02 compute-0 systemd[1]: libpod-conmon-b42dd6799894aef962250c86f3774cb82dbc83f9937a0c2d17126587c12d357d.scope: Deactivated successfully.
Jan 23 09:48:02 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/432416217' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 23 09:48:02 compute-0 ceph-mon[74335]: mgrmap e5: compute-0.nbdygh(active, since 8s)
Jan 23 09:48:02 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'crash'
Jan 23 09:48:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 23 09:48:02 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/151132801' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 09:48:02 compute-0 interesting_mcclintock[75106]: {
Jan 23 09:48:02 compute-0 interesting_mcclintock[75106]:     "epoch": 5,
Jan 23 09:48:02 compute-0 interesting_mcclintock[75106]:     "available": true,
Jan 23 09:48:02 compute-0 interesting_mcclintock[75106]:     "active_name": "compute-0.nbdygh",
Jan 23 09:48:02 compute-0 interesting_mcclintock[75106]:     "num_standby": 0
Jan 23 09:48:02 compute-0 interesting_mcclintock[75106]: }
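Note: the interesting_mcclintock container ran "ceph mgr stat" to confirm which daemon is active (epoch 5, compute-0.nbdygh) before the bootstrap proceeds to use the orchestrator. A small sketch of reading that same document; the explicit --format json flag is an assumption added for robustness.

    import json
    import subprocess

    def active_mgr_name():
        # `ceph mgr stat` returns the small JSON document logged above.
        stat = json.loads(
            subprocess.check_output(
                ["ceph", "mgr", "stat", "--format", "json"], text=True
            )
        )
        return stat["active_name"] if stat.get("available") else None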
Jan 23 09:48:02 compute-0 ceph-mgr[74633]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:48:02 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'dashboard'
Jan 23 09:48:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:02.728+0000 7fa5cb84f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:48:02 compute-0 systemd[1]: libpod-ec85182edd80931a939e915896d386fd6fce923be1958dab0786e09f591dc180.scope: Deactivated successfully.
Jan 23 09:48:02 compute-0 podman[75090]: 2026-01-23 09:48:02.734144769 +0000 UTC m=+0.738762933 container died ec85182edd80931a939e915896d386fd6fce923be1958dab0786e09f591dc180 (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 23 09:48:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-71b6712bb752fdddae0f094a053f5b5d617c4fd76b05f050b148b9ec05b0ed94-merged.mount: Deactivated successfully.
Jan 23 09:48:03 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'devicehealth'
Jan 23 09:48:03 compute-0 ceph-mgr[74633]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:48:03 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'diskprediction_local'
Jan 23 09:48:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:03.431+0000 7fa5cb84f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:48:03 compute-0 podman[75090]: 2026-01-23 09:48:03.442951815 +0000 UTC m=+1.447569929 container remove ec85182edd80931a939e915896d386fd6fce923be1958dab0786e09f591dc180 (image=quay.io/ceph/ceph:v19, name=interesting_mcclintock, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 23 09:48:03 compute-0 systemd[1]: libpod-conmon-ec85182edd80931a939e915896d386fd6fce923be1958dab0786e09f591dc180.scope: Deactivated successfully.
Jan 23 09:48:03 compute-0 podman[75155]: 2026-01-23 09:48:03.493036631 +0000 UTC m=+0.026308121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 23 09:48:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 23 09:48:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   from numpy import show_config as show_numpy_config
Jan 23 09:48:03 compute-0 ceph-mgr[74633]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:48:03 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'influx'
Jan 23 09:48:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:03.625+0000 7fa5cb84f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:48:03 compute-0 ceph-mgr[74633]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:48:03 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'insights'
Jan 23 09:48:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:03.711+0000 7fa5cb84f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:48:03 compute-0 podman[75155]: 2026-01-23 09:48:03.713496524 +0000 UTC m=+0.246767994 container create 8c19243d1c2827fd819ce836cb8b68310bf8034b8b339aa095d5af2bc105591c (image=quay.io/ceph/ceph:v19, name=great_hellman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/151132801' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 09:48:03 compute-0 systemd[1]: Started libpod-conmon-8c19243d1c2827fd819ce836cb8b68310bf8034b8b339aa095d5af2bc105591c.scope.
Jan 23 09:48:03 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e08da98ee749b5c5a1898d71f93c4b1d47d28d645faab8e732c2b91c20d4c6b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e08da98ee749b5c5a1898d71f93c4b1d47d28d645faab8e732c2b91c20d4c6b1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e08da98ee749b5c5a1898d71f93c4b1d47d28d645faab8e732c2b91c20d4c6b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:03 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'iostat'
Jan 23 09:48:03 compute-0 ceph-mgr[74633]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:48:03 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'k8sevents'
Jan 23 09:48:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:03.874+0000 7fa5cb84f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:48:03 compute-0 podman[75155]: 2026-01-23 09:48:03.951020576 +0000 UTC m=+0.484292076 container init 8c19243d1c2827fd819ce836cb8b68310bf8034b8b339aa095d5af2bc105591c (image=quay.io/ceph/ceph:v19, name=great_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Jan 23 09:48:03 compute-0 podman[75155]: 2026-01-23 09:48:03.958288228 +0000 UTC m=+0.491559708 container start 8c19243d1c2827fd819ce836cb8b68310bf8034b8b339aa095d5af2bc105591c (image=quay.io/ceph/ceph:v19, name=great_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 09:48:03 compute-0 podman[75155]: 2026-01-23 09:48:03.963389418 +0000 UTC m=+0.496660918 container attach 8c19243d1c2827fd819ce836cb8b68310bf8034b8b339aa095d5af2bc105591c (image=quay.io/ceph/ceph:v19, name=great_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 09:48:04 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'localpool'
Jan 23 09:48:04 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mds_autoscaler'
Jan 23 09:48:04 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mirroring'
Jan 23 09:48:04 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'nfs'
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'orchestrator'
Jan 23 09:48:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:05.057+0000 7fa5cb84f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:48:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:05.314+0000 7fa5cb84f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_perf_query'
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_support'
Jan 23 09:48:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:05.402+0000 7fa5cb84f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'pg_autoscaler'
Jan 23 09:48:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:05.483+0000 7fa5cb84f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'progress'
Jan 23 09:48:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:05.573+0000 7fa5cb84f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:48:05 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'prometheus'
Jan 23 09:48:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:05.653+0000 7fa5cb84f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:48:06 compute-0 ceph-mgr[74633]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:48:06 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rbd_support'
Jan 23 09:48:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:06.040+0000 7fa5cb84f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:48:06 compute-0 ceph-mgr[74633]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:48:06 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'restful'
Jan 23 09:48:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:06.145+0000 7fa5cb84f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:48:06 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rgw'
Jan 23 09:48:06 compute-0 ceph-mgr[74633]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:48:06 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rook'
Jan 23 09:48:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:06.615+0000 7fa5cb84f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'selftest'
Jan 23 09:48:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:07.236+0000 7fa5cb84f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'snap_schedule'
Jan 23 09:48:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:07.316+0000 7fa5cb84f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'stats'
Jan 23 09:48:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:07.408+0000 7fa5cb84f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'status'
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telegraf'
Jan 23 09:48:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:07.576+0000 7fa5cb84f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telemetry'
Jan 23 09:48:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:07.653+0000 7fa5cb84f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:48:07 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'test_orchestrator'
Jan 23 09:48:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:07.831+0000 7fa5cb84f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'volumes'
Jan 23 09:48:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:08.063+0000 7fa5cb84f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'zabbix'
Jan 23 09:48:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:08.340+0000 7fa5cb84f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:48:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:48:08.420+0000 7fa5cb84f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Active manager daemon compute-0.nbdygh restarted
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.nbdygh
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: ms_deliver_dispatch: unhandled message 0x55dda719ad00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr handle_mgr_map Activating!
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.nbdygh(active, starting, since 0.0831487s)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr handle_mgr_map I am now activating
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"} v 0)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e1 all = 1
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: balancer
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [balancer INFO root] Starting
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Manager daemon compute-0.nbdygh is now available
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:48:08
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [balancer INFO root] No pools available
Jan 23 09:48:08 compute-0 ceph-mon[74335]: Active manager daemon compute-0.nbdygh restarted
Jan 23 09:48:08 compute-0 ceph-mon[74335]: Activating manager daemon compute-0.nbdygh
Jan 23 09:48:08 compute-0 ceph-mon[74335]: osdmap e2: 0 total, 0 up, 0 in
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mgrmap e6: compute-0.nbdygh(active, starting, since 0.0831487s)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: cephadm
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: crash
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: devicehealth
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Starting
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: iostat
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: nfs
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: orchestrator
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: pg_autoscaler
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: progress
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [progress INFO root] Loading...
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [progress INFO root] No stored events to load
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [progress INFO root] Loaded [] historic events
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [progress INFO root] Loaded OSDMap, ready.
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] recovery thread starting
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] starting setup
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: rbd_support
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: restful
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: status
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [restful INFO root] server_addr: :: server_port: 8003
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [restful WARNING root] server not running: no certificate configured
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: telemetry
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"} v 0)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] PerfHandler: starting
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TaskHandler: starting
Jan 23 09:48:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"} v 0)
Jan 23 09:48:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"}]: dispatch
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] setup complete
Jan 23 09:48:08 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: volumes
Jan 23 09:48:09 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.nbdygh(active, since 1.17708s)
Jan 23 09:48:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 23 09:48:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 23 09:48:09 compute-0 great_hellman[75172]: {
Jan 23 09:48:09 compute-0 great_hellman[75172]:     "mgrmap_epoch": 7,
Jan 23 09:48:09 compute-0 great_hellman[75172]:     "initialized": true
Jan 23 09:48:09 compute-0 great_hellman[75172]: }
Jan 23 09:48:09 compute-0 systemd[1]: libpod-8c19243d1c2827fd819ce836cb8b68310bf8034b8b339aa095d5af2bc105591c.scope: Deactivated successfully.
Jan 23 09:48:09 compute-0 podman[75155]: 2026-01-23 09:48:09.629131954 +0000 UTC m=+6.162403424 container died 8c19243d1c2827fd819ce836cb8b68310bf8034b8b339aa095d5af2bc105591c (image=quay.io/ceph/ceph:v19, name=great_hellman, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 09:48:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Jan 23 09:48:09 compute-0 ceph-mon[74335]: Manager daemon compute-0.nbdygh is now available
Jan 23 09:48:09 compute-0 ceph-mon[74335]: Found migration_current of "None". Setting to last migration.
Jan 23 09:48:09 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:09 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:09 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:48:09 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:48:09 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"}]: dispatch
Jan 23 09:48:09 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"}]: dispatch
Jan 23 09:48:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e08da98ee749b5c5a1898d71f93c4b1d47d28d645faab8e732c2b91c20d4c6b1-merged.mount: Deactivated successfully.
Jan 23 09:48:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Jan 23 09:48:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:10 compute-0 podman[75155]: 2026-01-23 09:48:10.109776801 +0000 UTC m=+6.643048271 container remove 8c19243d1c2827fd819ce836cb8b68310bf8034b8b339aa095d5af2bc105591c (image=quay.io/ceph/ceph:v19, name=great_hellman, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 23 09:48:10 compute-0 podman[75321]: 2026-01-23 09:48:10.193069559 +0000 UTC m=+0.062924912 container create d09f0678784d68d5447b41fba9c1cdd3e3c870d088e7f43683fb93d7bdd0fb56 (image=quay.io/ceph/ceph:v19, name=agitated_mclean, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 09:48:10 compute-0 systemd[1]: libpod-conmon-8c19243d1c2827fd819ce836cb8b68310bf8034b8b339aa095d5af2bc105591c.scope: Deactivated successfully.
Jan 23 09:48:10 compute-0 systemd[1]: Started libpod-conmon-d09f0678784d68d5447b41fba9c1cdd3e3c870d088e7f43683fb93d7bdd0fb56.scope.
Jan 23 09:48:10 compute-0 podman[75321]: 2026-01-23 09:48:10.152420649 +0000 UTC m=+0.022276032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:10 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51d077c1683c47f7761768901436383080e2990910dac624ecbc465f5e15338/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51d077c1683c47f7761768901436383080e2990910dac624ecbc465f5e15338/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51d077c1683c47f7761768901436383080e2990910dac624ecbc465f5e15338/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:10 compute-0 podman[75321]: 2026-01-23 09:48:10.278030606 +0000 UTC m=+0.147885989 container init d09f0678784d68d5447b41fba9c1cdd3e3c870d088e7f43683fb93d7bdd0fb56 (image=quay.io/ceph/ceph:v19, name=agitated_mclean, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 09:48:10 compute-0 podman[75321]: 2026-01-23 09:48:10.283414654 +0000 UTC m=+0.153270007 container start d09f0678784d68d5447b41fba9c1cdd3e3c870d088e7f43683fb93d7bdd0fb56 (image=quay.io/ceph/ceph:v19, name=agitated_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 09:48:10 compute-0 podman[75321]: 2026-01-23 09:48:10.28671715 +0000 UTC m=+0.156572523 container attach d09f0678784d68d5447b41fba9c1cdd3e3c870d088e7f43683fb93d7bdd0fb56 (image=quay.io/ceph/ceph:v19, name=agitated_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 23 09:48:10 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:48:10 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 23 09:48:10 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:48:10] ENGINE Bus STARTING
Jan 23 09:48:10 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:48:10] ENGINE Bus STARTING
Jan 23 09:48:11 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:48:11] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:48:11 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:48:11] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:48:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:11 compute-0 systemd[1]: libpod-d09f0678784d68d5447b41fba9c1cdd3e3c870d088e7f43683fb93d7bdd0fb56.scope: Deactivated successfully.
Jan 23 09:48:11 compute-0 podman[75321]: 2026-01-23 09:48:11.177570224 +0000 UTC m=+1.047425577 container died d09f0678784d68d5447b41fba9c1cdd3e3c870d088e7f43683fb93d7bdd0fb56 (image=quay.io/ceph/ceph:v19, name=agitated_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 09:48:11 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:48:11] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:48:11 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:48:11] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:48:11 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:48:11] ENGINE Bus STARTED
Jan 23 09:48:11 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:48:11] ENGINE Bus STARTED
Jan 23 09:48:11 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:48:11] ENGINE Client ('192.168.122.100', 58980) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:48:11 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:48:11] ENGINE Client ('192.168.122.100', 58980) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:48:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 09:48:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:48:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 09:48:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:48:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019917943 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:48:12 compute-0 ceph-mon[74335]: mgrmap e7: compute-0.nbdygh(active, since 1.17708s)
Jan 23 09:48:12 compute-0 ceph-mon[74335]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 23 09:48:12 compute-0 ceph-mon[74335]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 23 09:48:12 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:12 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:12 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.nbdygh(active, since 3s)
Jan 23 09:48:12 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:48:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a51d077c1683c47f7761768901436383080e2990910dac624ecbc465f5e15338-merged.mount: Deactivated successfully.
Jan 23 09:48:13 compute-0 podman[75321]: 2026-01-23 09:48:13.087837743 +0000 UTC m=+2.957693106 container remove d09f0678784d68d5447b41fba9c1cdd3e3c870d088e7f43683fb93d7bdd0fb56 (image=quay.io/ceph/ceph:v19, name=agitated_mclean, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:13 compute-0 podman[75399]: 2026-01-23 09:48:13.148407736 +0000 UTC m=+0.041029092 container create 19e46f28962014667b3288067325150178d8765883dae9a84e0cd5b4e68fe3ab (image=quay.io/ceph/ceph:v19, name=silly_antonelli, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 23 09:48:13 compute-0 systemd[1]: Started libpod-conmon-19e46f28962014667b3288067325150178d8765883dae9a84e0cd5b4e68fe3ab.scope.
Jan 23 09:48:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc2bb1a21cfbcba83eae5e86b5dd1c10448f26f6804b20c56c59a9b729138a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc2bb1a21cfbcba83eae5e86b5dd1c10448f26f6804b20c56c59a9b729138a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc2bb1a21cfbcba83eae5e86b5dd1c10448f26f6804b20c56c59a9b729138a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:13 compute-0 podman[75399]: 2026-01-23 09:48:13.12908674 +0000 UTC m=+0.021708126 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:13 compute-0 ceph-mon[74335]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:13 compute-0 ceph-mon[74335]: [23/Jan/2026:09:48:10] ENGINE Bus STARTING
Jan 23 09:48:13 compute-0 ceph-mon[74335]: [23/Jan/2026:09:48:11] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:48:13 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:13 compute-0 ceph-mon[74335]: [23/Jan/2026:09:48:11] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:48:13 compute-0 ceph-mon[74335]: [23/Jan/2026:09:48:11] ENGINE Bus STARTED
Jan 23 09:48:13 compute-0 ceph-mon[74335]: [23/Jan/2026:09:48:11] ENGINE Client ('192.168.122.100', 58980) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:48:13 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:48:13 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:48:13 compute-0 ceph-mon[74335]: mgrmap e8: compute-0.nbdygh(active, since 3s)
Jan 23 09:48:13 compute-0 podman[75399]: 2026-01-23 09:48:13.344884287 +0000 UTC m=+0.237505663 container init 19e46f28962014667b3288067325150178d8765883dae9a84e0cd5b4e68fe3ab (image=quay.io/ceph/ceph:v19, name=silly_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:48:13 compute-0 podman[75399]: 2026-01-23 09:48:13.350623745 +0000 UTC m=+0.243245101 container start 19e46f28962014667b3288067325150178d8765883dae9a84e0cd5b4e68fe3ab (image=quay.io/ceph/ceph:v19, name=silly_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 09:48:13 compute-0 podman[75399]: 2026-01-23 09:48:13.385262519 +0000 UTC m=+0.277883885 container attach 19e46f28962014667b3288067325150178d8765883dae9a84e0cd5b4e68fe3ab (image=quay.io/ceph/ceph:v19, name=silly_antonelli, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:48:13 compute-0 systemd[1]: libpod-conmon-d09f0678784d68d5447b41fba9c1cdd3e3c870d088e7f43683fb93d7bdd0fb56.scope: Deactivated successfully.
Jan 23 09:48:13 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 23 09:48:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:13 compute-0 ceph-mgr[74633]: [cephadm INFO root] Set ssh ssh_user
Jan 23 09:48:13 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 23 09:48:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 23 09:48:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:13 compute-0 ceph-mgr[74633]: [cephadm INFO root] Set ssh ssh_config
Jan 23 09:48:13 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 23 09:48:13 compute-0 ceph-mgr[74633]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 23 09:48:13 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 23 09:48:13 compute-0 silly_antonelli[75415]: ssh user set to ceph-admin. sudo will be used
Jan 23 09:48:13 compute-0 systemd[1]: libpod-19e46f28962014667b3288067325150178d8765883dae9a84e0cd5b4e68fe3ab.scope: Deactivated successfully.
Jan 23 09:48:13 compute-0 podman[75443]: 2026-01-23 09:48:13.79463288 +0000 UTC m=+0.024537899 container died 19e46f28962014667b3288067325150178d8765883dae9a84e0cd5b4e68fe3ab (image=quay.io/ceph/ceph:v19, name=silly_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 23 09:48:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bc2bb1a21cfbcba83eae5e86b5dd1c10448f26f6804b20c56c59a9b729138a2-merged.mount: Deactivated successfully.
Jan 23 09:48:14 compute-0 podman[75443]: 2026-01-23 09:48:14.058821283 +0000 UTC m=+0.288726312 container remove 19e46f28962014667b3288067325150178d8765883dae9a84e0cd5b4e68fe3ab (image=quay.io/ceph/ceph:v19, name=silly_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 23 09:48:14 compute-0 systemd[1]: libpod-conmon-19e46f28962014667b3288067325150178d8765883dae9a84e0cd5b4e68fe3ab.scope: Deactivated successfully.
Jan 23 09:48:14 compute-0 podman[75458]: 2026-01-23 09:48:14.130764929 +0000 UTC m=+0.046556634 container create e7263747b05ab180c925c94fbbc67517ce18b39e407e8a17702c2d0258cc2cdd (image=quay.io/ceph/ceph:v19, name=lucid_lovelace, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 09:48:14 compute-0 systemd[1]: Started libpod-conmon-e7263747b05ab180c925c94fbbc67517ce18b39e407e8a17702c2d0258cc2cdd.scope.
Jan 23 09:48:14 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d441545764cb0729cb4b084b5053528ead15efa580bf71f79e01eee1c7a0ce75/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d441545764cb0729cb4b084b5053528ead15efa580bf71f79e01eee1c7a0ce75/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d441545764cb0729cb4b084b5053528ead15efa580bf71f79e01eee1c7a0ce75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d441545764cb0729cb4b084b5053528ead15efa580bf71f79e01eee1c7a0ce75/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d441545764cb0729cb4b084b5053528ead15efa580bf71f79e01eee1c7a0ce75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:14 compute-0 podman[75458]: 2026-01-23 09:48:14.193721571 +0000 UTC m=+0.109513316 container init e7263747b05ab180c925c94fbbc67517ce18b39e407e8a17702c2d0258cc2cdd (image=quay.io/ceph/ceph:v19, name=lucid_lovelace, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:14 compute-0 podman[75458]: 2026-01-23 09:48:14.106933801 +0000 UTC m=+0.022725536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:14 compute-0 podman[75458]: 2026-01-23 09:48:14.20254055 +0000 UTC m=+0.118332265 container start e7263747b05ab180c925c94fbbc67517ce18b39e407e8a17702c2d0258cc2cdd (image=quay.io/ceph/ceph:v19, name=lucid_lovelace, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 09:48:14 compute-0 podman[75458]: 2026-01-23 09:48:14.206126494 +0000 UTC m=+0.121918199 container attach e7263747b05ab180c925c94fbbc67517ce18b39e407e8a17702c2d0258cc2cdd (image=quay.io/ceph/ceph:v19, name=lucid_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 23 09:48:14 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:48:14 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 23 09:48:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:15 compute-0 ceph-mgr[74633]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 23 09:48:15 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 23 09:48:15 compute-0 ceph-mgr[74633]: [cephadm INFO root] Set ssh private key
Jan 23 09:48:15 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 23 09:48:15 compute-0 systemd[1]: libpod-e7263747b05ab180c925c94fbbc67517ce18b39e407e8a17702c2d0258cc2cdd.scope: Deactivated successfully.
Jan 23 09:48:15 compute-0 podman[75458]: 2026-01-23 09:48:15.051416205 +0000 UTC m=+0.967207920 container died e7263747b05ab180c925c94fbbc67517ce18b39e407e8a17702c2d0258cc2cdd (image=quay.io/ceph/ceph:v19, name=lucid_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 09:48:15 compute-0 ceph-mon[74335]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:15 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:15 compute-0 ceph-mon[74335]: Set ssh ssh_user
Jan 23 09:48:15 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:15 compute-0 ceph-mon[74335]: Set ssh ssh_config
Jan 23 09:48:15 compute-0 ceph-mon[74335]: ssh user set to ceph-admin. sudo will be used
Jan 23 09:48:15 compute-0 ceph-mon[74335]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d441545764cb0729cb4b084b5053528ead15efa580bf71f79e01eee1c7a0ce75-merged.mount: Deactivated successfully.
Jan 23 09:48:16 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:48:16 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:16 compute-0 ceph-mon[74335]: Set ssh ssh_identity_key
Jan 23 09:48:16 compute-0 ceph-mon[74335]: Set ssh private key
Jan 23 09:48:16 compute-0 podman[75458]: 2026-01-23 09:48:16.722184376 +0000 UTC m=+2.637976091 container remove e7263747b05ab180c925c94fbbc67517ce18b39e407e8a17702c2d0258cc2cdd (image=quay.io/ceph/ceph:v19, name=lucid_lovelace, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:48:16 compute-0 podman[75512]: 2026-01-23 09:48:16.830235499 +0000 UTC m=+0.088324277 container create aa561d77289318f777cb3cd1817bda6416640a30f4a59aac38cca6d6a45abf56 (image=quay.io/ceph/ceph:v19, name=eloquent_vaughan, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 09:48:16 compute-0 podman[75512]: 2026-01-23 09:48:16.764059202 +0000 UTC m=+0.022148000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:17 compute-0 systemd[1]: Started libpod-conmon-aa561d77289318f777cb3cd1817bda6416640a30f4a59aac38cca6d6a45abf56.scope.
Jan 23 09:48:17 compute-0 systemd[1]: libpod-conmon-e7263747b05ab180c925c94fbbc67517ce18b39e407e8a17702c2d0258cc2cdd.scope: Deactivated successfully.
Jan 23 09:48:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052991 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:48:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef51cd54c1f8d6250b65d35556f4fbc7575149ad44dd37b5a9b100497c7f20/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef51cd54c1f8d6250b65d35556f4fbc7575149ad44dd37b5a9b100497c7f20/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef51cd54c1f8d6250b65d35556f4fbc7575149ad44dd37b5a9b100497c7f20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef51cd54c1f8d6250b65d35556f4fbc7575149ad44dd37b5a9b100497c7f20/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef51cd54c1f8d6250b65d35556f4fbc7575149ad44dd37b5a9b100497c7f20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:17 compute-0 podman[75512]: 2026-01-23 09:48:17.148607547 +0000 UTC m=+0.406696345 container init aa561d77289318f777cb3cd1817bda6416640a30f4a59aac38cca6d6a45abf56 (image=quay.io/ceph/ceph:v19, name=eloquent_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:17 compute-0 podman[75512]: 2026-01-23 09:48:17.154251652 +0000 UTC m=+0.412340430 container start aa561d77289318f777cb3cd1817bda6416640a30f4a59aac38cca6d6a45abf56 (image=quay.io/ceph/ceph:v19, name=eloquent_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 09:48:17 compute-0 podman[75512]: 2026-01-23 09:48:17.160483965 +0000 UTC m=+0.418572773 container attach aa561d77289318f777cb3cd1817bda6416640a30f4a59aac38cca6d6a45abf56 (image=quay.io/ceph/ceph:v19, name=eloquent_vaughan, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 23 09:48:17 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 23 09:48:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:17 compute-0 ceph-mgr[74633]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 23 09:48:17 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 23 09:48:17 compute-0 systemd[1]: libpod-aa561d77289318f777cb3cd1817bda6416640a30f4a59aac38cca6d6a45abf56.scope: Deactivated successfully.
Jan 23 09:48:17 compute-0 podman[75512]: 2026-01-23 09:48:17.531300118 +0000 UTC m=+0.789388896 container died aa561d77289318f777cb3cd1817bda6416640a30f4a59aac38cca6d6a45abf56 (image=quay.io/ceph/ceph:v19, name=eloquent_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3ef51cd54c1f8d6250b65d35556f4fbc7575149ad44dd37b5a9b100497c7f20-merged.mount: Deactivated successfully.
Jan 23 09:48:17 compute-0 podman[75512]: 2026-01-23 09:48:17.56657246 +0000 UTC m=+0.824661238 container remove aa561d77289318f777cb3cd1817bda6416640a30f4a59aac38cca6d6a45abf56 (image=quay.io/ceph/ceph:v19, name=eloquent_vaughan, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:48:17 compute-0 systemd[1]: libpod-conmon-aa561d77289318f777cb3cd1817bda6416640a30f4a59aac38cca6d6a45abf56.scope: Deactivated successfully.
Jan 23 09:48:17 compute-0 podman[75566]: 2026-01-23 09:48:17.632134299 +0000 UTC m=+0.040382933 container create 112f89710d745bcd6f587117db36fc3697e94935b7010c9a7ba86171ddcd5faf (image=quay.io/ceph/ceph:v19, name=modest_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:17 compute-0 systemd[1]: Started libpod-conmon-112f89710d745bcd6f587117db36fc3697e94935b7010c9a7ba86171ddcd5faf.scope.
Jan 23 09:48:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df633a5682d55338416267057edcd0e61b559123f321e20a3ca4a2edb67b2277/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df633a5682d55338416267057edcd0e61b559123f321e20a3ca4a2edb67b2277/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df633a5682d55338416267057edcd0e61b559123f321e20a3ca4a2edb67b2277/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:17 compute-0 podman[75566]: 2026-01-23 09:48:17.697131942 +0000 UTC m=+0.105380596 container init 112f89710d745bcd6f587117db36fc3697e94935b7010c9a7ba86171ddcd5faf (image=quay.io/ceph/ceph:v19, name=modest_cartwright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 09:48:17 compute-0 podman[75566]: 2026-01-23 09:48:17.702009324 +0000 UTC m=+0.110257958 container start 112f89710d745bcd6f587117db36fc3697e94935b7010c9a7ba86171ddcd5faf (image=quay.io/ceph/ceph:v19, name=modest_cartwright, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:17 compute-0 podman[75566]: 2026-01-23 09:48:17.705147156 +0000 UTC m=+0.113395820 container attach 112f89710d745bcd6f587117db36fc3697e94935b7010c9a7ba86171ddcd5faf (image=quay.io/ceph/ceph:v19, name=modest_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:48:17 compute-0 podman[75566]: 2026-01-23 09:48:17.612239537 +0000 UTC m=+0.020488201 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:18 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:18 compute-0 modest_cartwright[75583]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtBegB4NdHsJG8oYi/TvzMVdaEc88IpZQYl9BuObV2owP/sM0dWfIUWSx/oCIs8WXfOC7RlBCxRpIdYUkzDVfTkH5NVc1MRxtPEQrXLryZdHwtHPhcQueVYCeKNiPKaTyXs/v9HSGhrjWmWaVXVKfxxrWo/FtQ2diKWIl8BigoRzy4XlN+iEBQvneRg42paEiE9Zyqi0tKgPZe+dYaPTQppuIN/J6/FIs0FXqTuERCpjiNNBpgjWhRDg57yiy9UzA5FwEZTFMFWcISrybwfTPDTktbfAufwBScZk6DvHQAhdWE8YbBg3YdjQrUxQBNvsaKJodG0uV8obU+nUKCmsBka0NgIdXhagqXcZV4tzc4LioS7G2piiyE/zdrXpcEpR1Pt72t2C8JttYG1D/c6XbjbXM42FpnRJGasVTOubFOjXeABc8D6q+eFyY/NZfklmrYIMvJnrfzBp2XUuHktHlWMeDyqJCsKylw1q8N5nEpuGeSrytPr2opxvjRC6SeLeE= zuul@controller
Jan 23 09:48:18 compute-0 systemd[1]: libpod-112f89710d745bcd6f587117db36fc3697e94935b7010c9a7ba86171ddcd5faf.scope: Deactivated successfully.
Jan 23 09:48:18 compute-0 podman[75566]: 2026-01-23 09:48:18.059106156 +0000 UTC m=+0.467354800 container died 112f89710d745bcd6f587117db36fc3697e94935b7010c9a7ba86171ddcd5faf (image=quay.io/ceph/ceph:v19, name=modest_cartwright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Jan 23 09:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-df633a5682d55338416267057edcd0e61b559123f321e20a3ca4a2edb67b2277-merged.mount: Deactivated successfully.
Jan 23 09:48:18 compute-0 podman[75566]: 2026-01-23 09:48:18.093837453 +0000 UTC m=+0.502086087 container remove 112f89710d745bcd6f587117db36fc3697e94935b7010c9a7ba86171ddcd5faf (image=quay.io/ceph/ceph:v19, name=modest_cartwright, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 09:48:18 compute-0 systemd[1]: libpod-conmon-112f89710d745bcd6f587117db36fc3697e94935b7010c9a7ba86171ddcd5faf.scope: Deactivated successfully.
Jan 23 09:48:18 compute-0 podman[75620]: 2026-01-23 09:48:18.175343908 +0000 UTC m=+0.043416941 container create fdb91845a217afdd2526db6e5914ada48cb9a8179a2e4cc0e5d44994299aa92f (image=quay.io/ceph/ceph:v19, name=nervous_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:48:18 compute-0 systemd[1]: Started libpod-conmon-fdb91845a217afdd2526db6e5914ada48cb9a8179a2e4cc0e5d44994299aa92f.scope.
Jan 23 09:48:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779c7bdb39753e5eb70648c095b6219a3ecc389dae502735455d90e8c6545c26/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779c7bdb39753e5eb70648c095b6219a3ecc389dae502735455d90e8c6545c26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779c7bdb39753e5eb70648c095b6219a3ecc389dae502735455d90e8c6545c26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:18 compute-0 podman[75620]: 2026-01-23 09:48:18.230707379 +0000 UTC m=+0.098780432 container init fdb91845a217afdd2526db6e5914ada48cb9a8179a2e4cc0e5d44994299aa92f (image=quay.io/ceph/ceph:v19, name=nervous_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:18 compute-0 podman[75620]: 2026-01-23 09:48:18.236502718 +0000 UTC m=+0.104575751 container start fdb91845a217afdd2526db6e5914ada48cb9a8179a2e4cc0e5d44994299aa92f (image=quay.io/ceph/ceph:v19, name=nervous_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 09:48:18 compute-0 podman[75620]: 2026-01-23 09:48:18.239453125 +0000 UTC m=+0.107526158 container attach fdb91845a217afdd2526db6e5914ada48cb9a8179a2e4cc0e5d44994299aa92f (image=quay.io/ceph/ceph:v19, name=nervous_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 09:48:18 compute-0 podman[75620]: 2026-01-23 09:48:18.158236448 +0000 UTC m=+0.026309511 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:18 compute-0 ceph-mon[74335]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:18 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:18 compute-0 ceph-mon[74335]: Set ssh ssh_identity_pub
Jan 23 09:48:18 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:48:18 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:18 compute-0 sshd-session[75664]: Accepted publickey for ceph-admin from 192.168.122.100 port 41980 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:18 compute-0 systemd-logind[784]: New session 21 of user ceph-admin.
Jan 23 09:48:18 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 23 09:48:18 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 23 09:48:18 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 23 09:48:18 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 23 09:48:18 compute-0 systemd[75668]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:18 compute-0 systemd[75668]: Queued start job for default target Main User Target.
Jan 23 09:48:18 compute-0 systemd[75668]: Created slice User Application Slice.
Jan 23 09:48:18 compute-0 systemd[75668]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 09:48:18 compute-0 systemd[75668]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 09:48:18 compute-0 systemd[75668]: Reached target Paths.
Jan 23 09:48:18 compute-0 systemd[75668]: Reached target Timers.
Jan 23 09:48:18 compute-0 systemd[75668]: Starting D-Bus User Message Bus Socket...
Jan 23 09:48:18 compute-0 systemd[75668]: Starting Create User's Volatile Files and Directories...
Jan 23 09:48:18 compute-0 sshd-session[75681]: Accepted publickey for ceph-admin from 192.168.122.100 port 41982 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:18 compute-0 systemd[75668]: Finished Create User's Volatile Files and Directories.
Jan 23 09:48:18 compute-0 systemd[75668]: Listening on D-Bus User Message Bus Socket.
Jan 23 09:48:18 compute-0 systemd[75668]: Reached target Sockets.
Jan 23 09:48:18 compute-0 systemd[75668]: Reached target Basic System.
Jan 23 09:48:18 compute-0 systemd[75668]: Reached target Main User Target.
Jan 23 09:48:18 compute-0 systemd[75668]: Startup finished in 112ms.
Jan 23 09:48:18 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 23 09:48:18 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 23 09:48:18 compute-0 systemd-logind[784]: New session 23 of user ceph-admin.
Jan 23 09:48:18 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 23 09:48:18 compute-0 sshd-session[75664]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:18 compute-0 sshd-session[75681]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:19 compute-0 sudo[75688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:19 compute-0 sudo[75688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:19 compute-0 sudo[75688]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:19 compute-0 sshd-session[75713]: Accepted publickey for ceph-admin from 192.168.122.100 port 41998 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:19 compute-0 systemd-logind[784]: New session 24 of user ceph-admin.
Jan 23 09:48:19 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 23 09:48:19 compute-0 sshd-session[75713]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:19 compute-0 sudo[75717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Jan 23 09:48:19 compute-0 sudo[75717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:19 compute-0 sudo[75717]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:19 compute-0 ceph-mon[74335]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:19 compute-0 sshd-session[75742]: Accepted publickey for ceph-admin from 192.168.122.100 port 42002 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:19 compute-0 systemd-logind[784]: New session 25 of user ceph-admin.
Jan 23 09:48:19 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 23 09:48:19 compute-0 sshd-session[75742]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:19 compute-0 sudo[75746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Jan 23 09:48:19 compute-0 sudo[75746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:19 compute-0 sudo[75746]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:19 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 23 09:48:19 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 23 09:48:19 compute-0 sshd-session[75771]: Accepted publickey for ceph-admin from 192.168.122.100 port 42006 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:19 compute-0 systemd-logind[784]: New session 26 of user ceph-admin.
Jan 23 09:48:19 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 23 09:48:19 compute-0 sshd-session[75771]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:20 compute-0 sudo[75775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:48:20 compute-0 sudo[75775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:20 compute-0 sudo[75775]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:20 compute-0 sshd-session[75800]: Accepted publickey for ceph-admin from 192.168.122.100 port 42014 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:20 compute-0 systemd-logind[784]: New session 27 of user ceph-admin.
Jan 23 09:48:20 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 23 09:48:20 compute-0 sshd-session[75800]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:20 compute-0 sudo[75804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:48:20 compute-0 sudo[75804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:20 compute-0 sudo[75804]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:20 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:48:20 compute-0 ceph-mon[74335]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:20 compute-0 sshd-session[75829]: Accepted publickey for ceph-admin from 192.168.122.100 port 42022 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:20 compute-0 systemd-logind[784]: New session 28 of user ceph-admin.
Jan 23 09:48:20 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 23 09:48:20 compute-0 sshd-session[75829]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:20 compute-0 sudo[75833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Jan 23 09:48:20 compute-0 sudo[75833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:20 compute-0 sudo[75833]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:20 compute-0 sshd-session[75858]: Accepted publickey for ceph-admin from 192.168.122.100 port 42032 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:20 compute-0 systemd-logind[784]: New session 29 of user ceph-admin.
Jan 23 09:48:20 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 23 09:48:20 compute-0 sshd-session[75858]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:20 compute-0 sudo[75862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:48:20 compute-0 sudo[75862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:20 compute-0 sudo[75862]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:21 compute-0 sshd-session[75887]: Accepted publickey for ceph-admin from 192.168.122.100 port 42038 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:21 compute-0 systemd-logind[784]: New session 30 of user ceph-admin.
Jan 23 09:48:21 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 23 09:48:21 compute-0 sshd-session[75887]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:21 compute-0 sudo[75891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Jan 23 09:48:21 compute-0 sudo[75891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:21 compute-0 sudo[75891]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:21 compute-0 sshd-session[75916]: Accepted publickey for ceph-admin from 192.168.122.100 port 49362 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:21 compute-0 systemd-logind[784]: New session 31 of user ceph-admin.
Jan 23 09:48:21 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 23 09:48:21 compute-0 sshd-session[75916]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:21 compute-0 ceph-mon[74335]: Deploying cephadm binary to compute-0
Jan 23 09:48:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:48:22 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:48:22 compute-0 sshd-session[75943]: Accepted publickey for ceph-admin from 192.168.122.100 port 49376 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:22 compute-0 systemd-logind[784]: New session 32 of user ceph-admin.
Jan 23 09:48:22 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 23 09:48:22 compute-0 sshd-session[75943]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:22 compute-0 sudo[75947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Jan 23 09:48:22 compute-0 sudo[75947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:22 compute-0 sudo[75947]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:22 compute-0 sshd-session[75972]: Accepted publickey for ceph-admin from 192.168.122.100 port 49382 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:48:22 compute-0 systemd-logind[784]: New session 33 of user ceph-admin.
Jan 23 09:48:22 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 23 09:48:22 compute-0 sshd-session[75972]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:48:23 compute-0 sudo[75976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Jan 23 09:48:23 compute-0 sudo[75976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:23 compute-0 sudo[75976]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 09:48:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:23 compute-0 ceph-mgr[74633]: [cephadm INFO root] Added host compute-0
Jan 23 09:48:23 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 23 09:48:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 09:48:23 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:48:23 compute-0 nervous_lalande[75638]: Added host 'compute-0' with addr '192.168.122.100'
Jan 23 09:48:23 compute-0 systemd[1]: libpod-fdb91845a217afdd2526db6e5914ada48cb9a8179a2e4cc0e5d44994299aa92f.scope: Deactivated successfully.
Jan 23 09:48:23 compute-0 podman[75620]: 2026-01-23 09:48:23.507999265 +0000 UTC m=+5.376072318 container died fdb91845a217afdd2526db6e5914ada48cb9a8179a2e4cc0e5d44994299aa92f (image=quay.io/ceph/ceph:v19, name=nervous_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 09:48:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-779c7bdb39753e5eb70648c095b6219a3ecc389dae502735455d90e8c6545c26-merged.mount: Deactivated successfully.
Jan 23 09:48:23 compute-0 sudo[76023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:23 compute-0 sudo[76023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:23 compute-0 sudo[76023]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:23 compute-0 podman[75620]: 2026-01-23 09:48:23.551256431 +0000 UTC m=+5.419329464 container remove fdb91845a217afdd2526db6e5914ada48cb9a8179a2e4cc0e5d44994299aa92f (image=quay.io/ceph/ceph:v19, name=nervous_lalande, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:23 compute-0 systemd[1]: libpod-conmon-fdb91845a217afdd2526db6e5914ada48cb9a8179a2e4cc0e5d44994299aa92f.scope: Deactivated successfully.
Jan 23 09:48:23 compute-0 sudo[76060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Jan 23 09:48:23 compute-0 sudo[76060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:23 compute-0 podman[76061]: 2026-01-23 09:48:23.616954914 +0000 UTC m=+0.043233886 container create 21be9d1e422e0966e22cc568a1b40425b00ebdacd7de8c67a2067942670207df (image=quay.io/ceph/ceph:v19, name=nervous_hellman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 09:48:23 compute-0 systemd[1]: Started libpod-conmon-21be9d1e422e0966e22cc568a1b40425b00ebdacd7de8c67a2067942670207df.scope.
Jan 23 09:48:23 compute-0 podman[76061]: 2026-01-23 09:48:23.598259897 +0000 UTC m=+0.024538889 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d728855e903d4fd86db85481561c3af2a6f51f290a852f27326b423010269c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d728855e903d4fd86db85481561c3af2a6f51f290a852f27326b423010269c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d728855e903d4fd86db85481561c3af2a6f51f290a852f27326b423010269c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:23 compute-0 podman[76061]: 2026-01-23 09:48:23.724717198 +0000 UTC m=+0.150996200 container init 21be9d1e422e0966e22cc568a1b40425b00ebdacd7de8c67a2067942670207df (image=quay.io/ceph/ceph:v19, name=nervous_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 09:48:23 compute-0 podman[76061]: 2026-01-23 09:48:23.73400622 +0000 UTC m=+0.160285192 container start 21be9d1e422e0966e22cc568a1b40425b00ebdacd7de8c67a2067942670207df (image=quay.io/ceph/ceph:v19, name=nervous_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 09:48:23 compute-0 podman[76061]: 2026-01-23 09:48:23.742560641 +0000 UTC m=+0.168839643 container attach 21be9d1e422e0966e22cc568a1b40425b00ebdacd7de8c67a2067942670207df (image=quay.io/ceph/ceph:v19, name=nervous_hellman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 23 09:48:24 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:24 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 23 09:48:24 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 23 09:48:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 23 09:48:24 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:48:24 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:24 compute-0 nervous_hellman[76102]: Scheduled mon update...
Jan 23 09:48:24 compute-0 systemd[1]: libpod-21be9d1e422e0966e22cc568a1b40425b00ebdacd7de8c67a2067942670207df.scope: Deactivated successfully.
Jan 23 09:48:24 compute-0 podman[76061]: 2026-01-23 09:48:24.687271271 +0000 UTC m=+1.113550253 container died 21be9d1e422e0966e22cc568a1b40425b00ebdacd7de8c67a2067942670207df (image=quay.io/ceph/ceph:v19, name=nervous_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 09:48:24 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:24 compute-0 ceph-mon[74335]: Added host compute-0
Jan 23 09:48:24 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:48:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d728855e903d4fd86db85481561c3af2a6f51f290a852f27326b423010269c1-merged.mount: Deactivated successfully.
Jan 23 09:48:24 compute-0 podman[76137]: 2026-01-23 09:48:24.921083514 +0000 UTC m=+0.879508573 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:24 compute-0 podman[76061]: 2026-01-23 09:48:24.947984481 +0000 UTC m=+1.374263453 container remove 21be9d1e422e0966e22cc568a1b40425b00ebdacd7de8c67a2067942670207df (image=quay.io/ceph/ceph:v19, name=nervous_hellman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 09:48:24 compute-0 systemd[1]: libpod-conmon-21be9d1e422e0966e22cc568a1b40425b00ebdacd7de8c67a2067942670207df.scope: Deactivated successfully.
Jan 23 09:48:25 compute-0 podman[76173]: 2026-01-23 09:48:25.01627277 +0000 UTC m=+0.047152351 container create 4825a8ee7596bb0abd8e436d8b4cd8a3ea6cb2f078e89bfd2a867c506676d9e9 (image=quay.io/ceph/ceph:v19, name=sleepy_booth, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 09:48:25 compute-0 systemd[1]: Started libpod-conmon-4825a8ee7596bb0abd8e436d8b4cd8a3ea6cb2f078e89bfd2a867c506676d9e9.scope.
Jan 23 09:48:25 compute-0 podman[76192]: 2026-01-23 09:48:25.062281117 +0000 UTC m=+0.063043897 container create 71f8f9716b962a43850a438246a24a4a83314813412ca3ccaa8842458ecd6e4f (image=quay.io/ceph/ceph:v19, name=tender_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 23 09:48:25 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6fbab537edd560ae725a5feda1b3aa4f8f0f832c8dc37539d945bc1ea65627c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6fbab537edd560ae725a5feda1b3aa4f8f0f832c8dc37539d945bc1ea65627c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6fbab537edd560ae725a5feda1b3aa4f8f0f832c8dc37539d945bc1ea65627c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:25 compute-0 podman[76173]: 2026-01-23 09:48:24.992282048 +0000 UTC m=+0.023161649 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:25 compute-0 systemd[1]: Started libpod-conmon-71f8f9716b962a43850a438246a24a4a83314813412ca3ccaa8842458ecd6e4f.scope.
Jan 23 09:48:25 compute-0 podman[76173]: 2026-01-23 09:48:25.091303946 +0000 UTC m=+0.122183557 container init 4825a8ee7596bb0abd8e436d8b4cd8a3ea6cb2f078e89bfd2a867c506676d9e9 (image=quay.io/ceph/ceph:v19, name=sleepy_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 23 09:48:25 compute-0 podman[76173]: 2026-01-23 09:48:25.096656933 +0000 UTC m=+0.127536534 container start 4825a8ee7596bb0abd8e436d8b4cd8a3ea6cb2f078e89bfd2a867c506676d9e9 (image=quay.io/ceph/ceph:v19, name=sleepy_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:25 compute-0 podman[76173]: 2026-01-23 09:48:25.112762394 +0000 UTC m=+0.143642005 container attach 4825a8ee7596bb0abd8e436d8b4cd8a3ea6cb2f078e89bfd2a867c506676d9e9 (image=quay.io/ceph/ceph:v19, name=sleepy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Jan 23 09:48:25 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:25 compute-0 podman[76192]: 2026-01-23 09:48:25.132531943 +0000 UTC m=+0.133294743 container init 71f8f9716b962a43850a438246a24a4a83314813412ca3ccaa8842458ecd6e4f (image=quay.io/ceph/ceph:v19, name=tender_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:25 compute-0 podman[76192]: 2026-01-23 09:48:25.039070637 +0000 UTC m=+0.039833447 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:25 compute-0 podman[76192]: 2026-01-23 09:48:25.138094686 +0000 UTC m=+0.138857466 container start 71f8f9716b962a43850a438246a24a4a83314813412ca3ccaa8842458ecd6e4f (image=quay.io/ceph/ceph:v19, name=tender_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:25 compute-0 podman[76192]: 2026-01-23 09:48:25.143062031 +0000 UTC m=+0.143824811 container attach 71f8f9716b962a43850a438246a24a4a83314813412ca3ccaa8842458ecd6e4f (image=quay.io/ceph/ceph:v19, name=tender_lamport, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:48:25 compute-0 tender_lamport[76215]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Jan 23 09:48:25 compute-0 systemd[1]: libpod-71f8f9716b962a43850a438246a24a4a83314813412ca3ccaa8842458ecd6e4f.scope: Deactivated successfully.
Jan 23 09:48:25 compute-0 podman[76192]: 2026-01-23 09:48:25.237631549 +0000 UTC m=+0.238394329 container died 71f8f9716b962a43850a438246a24a4a83314813412ca3ccaa8842458ecd6e4f (image=quay.io/ceph/ceph:v19, name=tender_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d54267375bd96e6efa27b683618d006f17dca22c959b0d78d3f3f90bf0c91f1-merged.mount: Deactivated successfully.
Jan 23 09:48:25 compute-0 podman[76192]: 2026-01-23 09:48:25.288137597 +0000 UTC m=+0.288900377 container remove 71f8f9716b962a43850a438246a24a4a83314813412ca3ccaa8842458ecd6e4f (image=quay.io/ceph/ceph:v19, name=tender_lamport, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:48:25 compute-0 systemd[1]: libpod-conmon-71f8f9716b962a43850a438246a24a4a83314813412ca3ccaa8842458ecd6e4f.scope: Deactivated successfully.
Jan 23 09:48:25 compute-0 sudo[76060]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 23 09:48:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:25 compute-0 sudo[76254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:25 compute-0 sudo[76254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:25 compute-0 sudo[76254]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:25 compute-0 sudo[76279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 23 09:48:25 compute-0 sudo[76279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:25 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:25 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 23 09:48:25 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 23 09:48:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 09:48:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:25 compute-0 sleepy_booth[76210]: Scheduled mgr update...
Jan 23 09:48:25 compute-0 systemd[1]: libpod-4825a8ee7596bb0abd8e436d8b4cd8a3ea6cb2f078e89bfd2a867c506676d9e9.scope: Deactivated successfully.
Jan 23 09:48:25 compute-0 podman[76173]: 2026-01-23 09:48:25.539918257 +0000 UTC m=+0.570797848 container died 4825a8ee7596bb0abd8e436d8b4cd8a3ea6cb2f078e89bfd2a867c506676d9e9 (image=quay.io/ceph/ceph:v19, name=sleepy_booth, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 09:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6fbab537edd560ae725a5feda1b3aa4f8f0f832c8dc37539d945bc1ea65627c-merged.mount: Deactivated successfully.
Jan 23 09:48:25 compute-0 systemd[1]: libpod-conmon-4825a8ee7596bb0abd8e436d8b4cd8a3ea6cb2f078e89bfd2a867c506676d9e9.scope: Deactivated successfully.
Jan 23 09:48:25 compute-0 podman[76173]: 2026-01-23 09:48:25.586119769 +0000 UTC m=+0.616999350 container remove 4825a8ee7596bb0abd8e436d8b4cd8a3ea6cb2f078e89bfd2a867c506676d9e9 (image=quay.io/ceph/ceph:v19, name=sleepy_booth, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 09:48:25 compute-0 podman[76316]: 2026-01-23 09:48:25.659421864 +0000 UTC m=+0.042596847 container create b1c884e4af427c89d7023e52fb11368c3ae400c755e3090504bdaec086f54bd1 (image=quay.io/ceph/ceph:v19, name=sad_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 09:48:25 compute-0 systemd[1]: Started libpod-conmon-b1c884e4af427c89d7023e52fb11368c3ae400c755e3090504bdaec086f54bd1.scope.
Jan 23 09:48:25 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6665ee972c8a786de855a01cd940440ba036894fdba8b0cd9cdbf3c48f14c05/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6665ee972c8a786de855a01cd940440ba036894fdba8b0cd9cdbf3c48f14c05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6665ee972c8a786de855a01cd940440ba036894fdba8b0cd9cdbf3c48f14c05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:25 compute-0 podman[76316]: 2026-01-23 09:48:25.735603524 +0000 UTC m=+0.118778527 container init b1c884e4af427c89d7023e52fb11368c3ae400c755e3090504bdaec086f54bd1 (image=quay.io/ceph/ceph:v19, name=sad_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 23 09:48:25 compute-0 podman[76316]: 2026-01-23 09:48:25.641074847 +0000 UTC m=+0.024249850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:25 compute-0 podman[76316]: 2026-01-23 09:48:25.742098414 +0000 UTC m=+0.125273397 container start b1c884e4af427c89d7023e52fb11368c3ae400c755e3090504bdaec086f54bd1 (image=quay.io/ceph/ceph:v19, name=sad_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 09:48:25 compute-0 podman[76316]: 2026-01-23 09:48:25.748257914 +0000 UTC m=+0.131432907 container attach b1c884e4af427c89d7023e52fb11368c3ae400c755e3090504bdaec086f54bd1 (image=quay.io/ceph/ceph:v19, name=sad_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:25 compute-0 ceph-mon[74335]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:25 compute-0 ceph-mon[74335]: Saving service mon spec with placement count:5
Jan 23 09:48:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:25 compute-0 ceph-mon[74335]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:25 compute-0 ceph-mon[74335]: Saving service mgr spec with placement count:2
Jan 23 09:48:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:25 compute-0 sudo[76279]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:48:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:25 compute-0 sudo[76368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:25 compute-0 sudo[76368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:25 compute-0 sudo[76368]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:25 compute-0 sudo[76402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 09:48:25 compute-0 sudo[76402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:26 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:26 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service crash spec with placement *
Jan 23 09:48:26 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 23 09:48:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 23 09:48:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:26 compute-0 sad_brattain[76335]: Scheduled crash update...
Jan 23 09:48:26 compute-0 systemd[1]: libpod-b1c884e4af427c89d7023e52fb11368c3ae400c755e3090504bdaec086f54bd1.scope: Deactivated successfully.
Jan 23 09:48:26 compute-0 podman[76316]: 2026-01-23 09:48:26.16324001 +0000 UTC m=+0.546415013 container died b1c884e4af427c89d7023e52fb11368c3ae400c755e3090504bdaec086f54bd1 (image=quay.io/ceph/ceph:v19, name=sad_brattain, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6665ee972c8a786de855a01cd940440ba036894fdba8b0cd9cdbf3c48f14c05-merged.mount: Deactivated successfully.
Jan 23 09:48:26 compute-0 podman[76316]: 2026-01-23 09:48:26.208950918 +0000 UTC m=+0.592125901 container remove b1c884e4af427c89d7023e52fb11368c3ae400c755e3090504bdaec086f54bd1 (image=quay.io/ceph/ceph:v19, name=sad_brattain, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:26 compute-0 systemd[1]: libpod-conmon-b1c884e4af427c89d7023e52fb11368c3ae400c755e3090504bdaec086f54bd1.scope: Deactivated successfully.
Jan 23 09:48:26 compute-0 podman[76452]: 2026-01-23 09:48:26.27053867 +0000 UTC m=+0.039714523 container create c5141744afde98f717042e753c6ccd98e011052dcf8d15de8c50004fd3fdad8a (image=quay.io/ceph/ceph:v19, name=youthful_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:48:26 compute-0 systemd[1]: Started libpod-conmon-c5141744afde98f717042e753c6ccd98e011052dcf8d15de8c50004fd3fdad8a.scope.
Jan 23 09:48:26 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da350cea6918bc87a7f5b21ccd2b7401baa49439b85d3c0c4d0a1d3c6dd217c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da350cea6918bc87a7f5b21ccd2b7401baa49439b85d3c0c4d0a1d3c6dd217c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da350cea6918bc87a7f5b21ccd2b7401baa49439b85d3c0c4d0a1d3c6dd217c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:26 compute-0 podman[76452]: 2026-01-23 09:48:26.252811172 +0000 UTC m=+0.021987045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:26 compute-0 podman[76452]: 2026-01-23 09:48:26.349754698 +0000 UTC m=+0.118930571 container init c5141744afde98f717042e753c6ccd98e011052dcf8d15de8c50004fd3fdad8a (image=quay.io/ceph/ceph:v19, name=youthful_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:26 compute-0 podman[76452]: 2026-01-23 09:48:26.355709003 +0000 UTC m=+0.124884846 container start c5141744afde98f717042e753c6ccd98e011052dcf8d15de8c50004fd3fdad8a (image=quay.io/ceph/ceph:v19, name=youthful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 09:48:26 compute-0 podman[76452]: 2026-01-23 09:48:26.359634898 +0000 UTC m=+0.128810771 container attach c5141744afde98f717042e753c6ccd98e011052dcf8d15de8c50004fd3fdad8a (image=quay.io/ceph/ceph:v19, name=youthful_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:26 compute-0 ceph-mgr[74633]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 09:48:26 compute-0 podman[76549]: 2026-01-23 09:48:26.651580342 +0000 UTC m=+0.119647413 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 09:48:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 23 09:48:26 compute-0 podman[76549]: 2026-01-23 09:48:26.99896846 +0000 UTC m=+0.467035521 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 09:48:26 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:26 compute-0 ceph-mon[74335]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:26 compute-0 ceph-mon[74335]: Saving service crash spec with placement *
Jan 23 09:48:26 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2852970831' entity='client.admin' 
Jan 23 09:48:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:48:27 compute-0 systemd[1]: libpod-c5141744afde98f717042e753c6ccd98e011052dcf8d15de8c50004fd3fdad8a.scope: Deactivated successfully.
Jan 23 09:48:27 compute-0 podman[76452]: 2026-01-23 09:48:27.037775346 +0000 UTC m=+0.806951199 container died c5141744afde98f717042e753c6ccd98e011052dcf8d15de8c50004fd3fdad8a (image=quay.io/ceph/ceph:v19, name=youthful_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 09:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-da350cea6918bc87a7f5b21ccd2b7401baa49439b85d3c0c4d0a1d3c6dd217c2-merged.mount: Deactivated successfully.
Jan 23 09:48:27 compute-0 sudo[76402]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:48:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:27 compute-0 podman[76452]: 2026-01-23 09:48:27.218529296 +0000 UTC m=+0.987705149 container remove c5141744afde98f717042e753c6ccd98e011052dcf8d15de8c50004fd3fdad8a (image=quay.io/ceph/ceph:v19, name=youthful_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:48:27 compute-0 systemd[1]: libpod-conmon-c5141744afde98f717042e753c6ccd98e011052dcf8d15de8c50004fd3fdad8a.scope: Deactivated successfully.
Jan 23 09:48:27 compute-0 sudo[76608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:27 compute-0 sudo[76608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:27 compute-0 sudo[76608]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:27 compute-0 sudo[76642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 09:48:27 compute-0 sudo[76642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:27 compute-0 podman[76611]: 2026-01-23 09:48:27.397283548 +0000 UTC m=+0.028422883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:27 compute-0 podman[76611]: 2026-01-23 09:48:27.628786904 +0000 UTC m=+0.259926209 container create 86636013d4bf94b917763a4482fcf2ec49f7e23dae6cc17e2bd0f440ee225538 (image=quay.io/ceph/ceph:v19, name=charming_gould, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 09:48:27 compute-0 systemd[1]: Started libpod-conmon-86636013d4bf94b917763a4482fcf2ec49f7e23dae6cc17e2bd0f440ee225538.scope.
Jan 23 09:48:27 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76688 (sysctl)
Jan 23 09:48:27 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 23 09:48:27 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07f0d334f0c10566c8781d3c15efc220c7fc27827a5d08a36c667c4e13ebb05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07f0d334f0c10566c8781d3c15efc220c7fc27827a5d08a36c667c4e13ebb05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07f0d334f0c10566c8781d3c15efc220c7fc27827a5d08a36c667c4e13ebb05/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:27 compute-0 podman[76611]: 2026-01-23 09:48:27.92250222 +0000 UTC m=+0.553641565 container init 86636013d4bf94b917763a4482fcf2ec49f7e23dae6cc17e2bd0f440ee225538 (image=quay.io/ceph/ceph:v19, name=charming_gould, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 09:48:27 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 23 09:48:27 compute-0 podman[76611]: 2026-01-23 09:48:27.930494874 +0000 UTC m=+0.561634189 container start 86636013d4bf94b917763a4482fcf2ec49f7e23dae6cc17e2bd0f440ee225538 (image=quay.io/ceph/ceph:v19, name=charming_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 09:48:27 compute-0 podman[76611]: 2026-01-23 09:48:27.967681043 +0000 UTC m=+0.598820378 container attach 86636013d4bf94b917763a4482fcf2ec49f7e23dae6cc17e2bd0f440ee225538 (image=quay.io/ceph/ceph:v19, name=charming_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2852970831' entity='client.admin' 
Jan 23 09:48:28 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:28 compute-0 sudo[76642]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:28 compute-0 sudo[76725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:28 compute-0 sudo[76725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:28 compute-0 sudo[76725]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:28 compute-0 sudo[76759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 23 09:48:28 compute-0 sudo[76759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:28 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 23 09:48:28 compute-0 ceph-mgr[74633]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 23 09:48:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:28 compute-0 sudo[76759]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:28 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 23 09:48:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:48:28 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:28 compute-0 systemd[1]: libpod-86636013d4bf94b917763a4482fcf2ec49f7e23dae6cc17e2bd0f440ee225538.scope: Deactivated successfully.
Jan 23 09:48:28 compute-0 podman[76611]: 2026-01-23 09:48:28.789758244 +0000 UTC m=+1.420897549 container died 86636013d4bf94b917763a4482fcf2ec49f7e23dae6cc17e2bd0f440ee225538 (image=quay.io/ceph/ceph:v19, name=charming_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:28 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:28 compute-0 sudo[76815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:28 compute-0 sudo[76815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:28 compute-0 sudo[76815]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:29 compute-0 sudo[76841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- inventory --format=json-pretty --filter-for-batch
Jan 23 09:48:29 compute-0 sudo[76841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d07f0d334f0c10566c8781d3c15efc220c7fc27827a5d08a36c667c4e13ebb05-merged.mount: Deactivated successfully.
Jan 23 09:48:29 compute-0 ceph-mon[74335]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:29 compute-0 ceph-mon[74335]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:29 compute-0 ceph-mon[74335]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 23 09:48:29 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:29 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:29 compute-0 podman[76611]: 2026-01-23 09:48:29.253936619 +0000 UTC m=+1.885075924 container remove 86636013d4bf94b917763a4482fcf2ec49f7e23dae6cc17e2bd0f440ee225538 (image=quay.io/ceph/ceph:v19, name=charming_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 09:48:29 compute-0 podman[76868]: 2026-01-23 09:48:29.345282643 +0000 UTC m=+0.069788414 container create 469fea528e3e10a66d88167922b4616fccf8c7a09100223266f765f07580472d (image=quay.io/ceph/ceph:v19, name=peaceful_bardeen, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:48:29 compute-0 systemd[1]: Started libpod-conmon-469fea528e3e10a66d88167922b4616fccf8c7a09100223266f765f07580472d.scope.
Jan 23 09:48:29 compute-0 podman[76868]: 2026-01-23 09:48:29.298486053 +0000 UTC m=+0.022991844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:29 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4321831f03304b328add06e4aac691e1176d768955aaffeebf8a2f2d9824e1bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4321831f03304b328add06e4aac691e1176d768955aaffeebf8a2f2d9824e1bf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4321831f03304b328add06e4aac691e1176d768955aaffeebf8a2f2d9824e1bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:29 compute-0 podman[76868]: 2026-01-23 09:48:29.455729466 +0000 UTC m=+0.180235257 container init 469fea528e3e10a66d88167922b4616fccf8c7a09100223266f765f07580472d (image=quay.io/ceph/ceph:v19, name=peaceful_bardeen, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:29 compute-0 podman[76868]: 2026-01-23 09:48:29.464149232 +0000 UTC m=+0.188655003 container start 469fea528e3e10a66d88167922b4616fccf8c7a09100223266f765f07580472d (image=quay.io/ceph/ceph:v19, name=peaceful_bardeen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:48:29 compute-0 podman[76868]: 2026-01-23 09:48:29.476972037 +0000 UTC m=+0.201477808 container attach 469fea528e3e10a66d88167922b4616fccf8c7a09100223266f765f07580472d (image=quay.io/ceph/ceph:v19, name=peaceful_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 23 09:48:29 compute-0 systemd[1]: libpod-conmon-86636013d4bf94b917763a4482fcf2ec49f7e23dae6cc17e2bd0f440ee225538.scope: Deactivated successfully.
Jan 23 09:48:29 compute-0 podman[76925]: 2026-01-23 09:48:29.614608576 +0000 UTC m=+0.061833481 container create 691bd3730f71956704319cc0ad2becd823d561f28296b12bf4010e6092b48f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chebyshev, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 09:48:29 compute-0 systemd[1]: Started libpod-conmon-691bd3730f71956704319cc0ad2becd823d561f28296b12bf4010e6092b48f4f.scope.
Jan 23 09:48:29 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:29 compute-0 podman[76925]: 2026-01-23 09:48:29.584203906 +0000 UTC m=+0.031428831 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:48:29 compute-0 podman[76925]: 2026-01-23 09:48:29.686556002 +0000 UTC m=+0.133780917 container init 691bd3730f71956704319cc0ad2becd823d561f28296b12bf4010e6092b48f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 09:48:29 compute-0 podman[76925]: 2026-01-23 09:48:29.691720863 +0000 UTC m=+0.138945768 container start 691bd3730f71956704319cc0ad2becd823d561f28296b12bf4010e6092b48f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chebyshev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 23 09:48:29 compute-0 recursing_chebyshev[76960]: 167 167
Jan 23 09:48:29 compute-0 systemd[1]: libpod-691bd3730f71956704319cc0ad2becd823d561f28296b12bf4010e6092b48f4f.scope: Deactivated successfully.
Jan 23 09:48:29 compute-0 conmon[76960]: conmon 691bd3730f7195670431 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-691bd3730f71956704319cc0ad2becd823d561f28296b12bf4010e6092b48f4f.scope/container/memory.events
Jan 23 09:48:29 compute-0 podman[76925]: 2026-01-23 09:48:29.695188224 +0000 UTC m=+0.142413339 container attach 691bd3730f71956704319cc0ad2becd823d561f28296b12bf4010e6092b48f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 23 09:48:29 compute-0 podman[76925]: 2026-01-23 09:48:29.699414978 +0000 UTC m=+0.146639893 container died 691bd3730f71956704319cc0ad2becd823d561f28296b12bf4010e6092b48f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-89c4ffb1772fdec2a03d0be426fc11562557a17ffbc5c1809594d22071101159-merged.mount: Deactivated successfully.
Jan 23 09:48:29 compute-0 podman[76925]: 2026-01-23 09:48:29.737259046 +0000 UTC m=+0.184483951 container remove 691bd3730f71956704319cc0ad2becd823d561f28296b12bf4010e6092b48f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chebyshev, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:48:29 compute-0 systemd[1]: libpod-conmon-691bd3730f71956704319cc0ad2becd823d561f28296b12bf4010e6092b48f4f.scope: Deactivated successfully.
Jan 23 09:48:29 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 09:48:29 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:29 compute-0 ceph-mgr[74633]: [cephadm INFO root] Added label _admin to host compute-0
Jan 23 09:48:29 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 23 09:48:29 compute-0 peaceful_bardeen[76907]: Added label _admin to host compute-0
Jan 23 09:48:29 compute-0 systemd[1]: libpod-469fea528e3e10a66d88167922b4616fccf8c7a09100223266f765f07580472d.scope: Deactivated successfully.
Jan 23 09:48:29 compute-0 podman[76868]: 2026-01-23 09:48:29.903551082 +0000 UTC m=+0.628056853 container died 469fea528e3e10a66d88167922b4616fccf8c7a09100223266f765f07580472d (image=quay.io/ceph/ceph:v19, name=peaceful_bardeen, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 09:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4321831f03304b328add06e4aac691e1176d768955aaffeebf8a2f2d9824e1bf-merged.mount: Deactivated successfully.
Jan 23 09:48:29 compute-0 podman[76868]: 2026-01-23 09:48:29.997542173 +0000 UTC m=+0.722047944 container remove 469fea528e3e10a66d88167922b4616fccf8c7a09100223266f765f07580472d (image=quay.io/ceph/ceph:v19, name=peaceful_bardeen, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:48:30 compute-0 systemd[1]: libpod-conmon-469fea528e3e10a66d88167922b4616fccf8c7a09100223266f765f07580472d.scope: Deactivated successfully.
Jan 23 09:48:30 compute-0 podman[76991]: 2026-01-23 09:48:30.059598689 +0000 UTC m=+0.040429934 container create d5601360b85709ca91d0fcfd9607152792f6a64b626c3a7bafeb5cbf1272f08c (image=quay.io/ceph/ceph:v19, name=vigilant_engelbart, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 09:48:30 compute-0 systemd[1]: Started libpod-conmon-d5601360b85709ca91d0fcfd9607152792f6a64b626c3a7bafeb5cbf1272f08c.scope.
Jan 23 09:48:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17f0689e4ddbfea5d280ae7713d8706ca8906808f0d353866a104ef79d6e246/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17f0689e4ddbfea5d280ae7713d8706ca8906808f0d353866a104ef79d6e246/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17f0689e4ddbfea5d280ae7713d8706ca8906808f0d353866a104ef79d6e246/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:30 compute-0 podman[76991]: 2026-01-23 09:48:30.12661498 +0000 UTC m=+0.107446255 container init d5601360b85709ca91d0fcfd9607152792f6a64b626c3a7bafeb5cbf1272f08c (image=quay.io/ceph/ceph:v19, name=vigilant_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:30 compute-0 podman[76991]: 2026-01-23 09:48:30.131601816 +0000 UTC m=+0.112433061 container start d5601360b85709ca91d0fcfd9607152792f6a64b626c3a7bafeb5cbf1272f08c (image=quay.io/ceph/ceph:v19, name=vigilant_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 23 09:48:30 compute-0 podman[76991]: 2026-01-23 09:48:30.134382448 +0000 UTC m=+0.115213713 container attach d5601360b85709ca91d0fcfd9607152792f6a64b626c3a7bafeb5cbf1272f08c (image=quay.io/ceph/ceph:v19, name=vigilant_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:30 compute-0 podman[76991]: 2026-01-23 09:48:30.043265731 +0000 UTC m=+0.024096996 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 23 09:48:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/93733388' entity='client.admin' 
Jan 23 09:48:30 compute-0 vigilant_engelbart[77007]: set mgr/dashboard/cluster/status
Jan 23 09:48:30 compute-0 systemd[1]: libpod-d5601360b85709ca91d0fcfd9607152792f6a64b626c3a7bafeb5cbf1272f08c.scope: Deactivated successfully.
Jan 23 09:48:30 compute-0 podman[76991]: 2026-01-23 09:48:30.660385243 +0000 UTC m=+0.641216488 container died d5601360b85709ca91d0fcfd9607152792f6a64b626c3a7bafeb5cbf1272f08c (image=quay.io/ceph/ceph:v19, name=vigilant_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 09:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e17f0689e4ddbfea5d280ae7713d8706ca8906808f0d353866a104ef79d6e246-merged.mount: Deactivated successfully.
Jan 23 09:48:30 compute-0 podman[76991]: 2026-01-23 09:48:30.694455571 +0000 UTC m=+0.675286826 container remove d5601360b85709ca91d0fcfd9607152792f6a64b626c3a7bafeb5cbf1272f08c (image=quay.io/ceph/ceph:v19, name=vigilant_engelbart, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 09:48:30 compute-0 systemd[1]: libpod-conmon-d5601360b85709ca91d0fcfd9607152792f6a64b626c3a7bafeb5cbf1272f08c.scope: Deactivated successfully.
Jan 23 09:48:30 compute-0 sudo[73292]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:30 compute-0 podman[77053]: 2026-01-23 09:48:30.839106794 +0000 UTC m=+0.037125297 container create 6e60428e83a9e92519b4d4cce3af64339da12a3f6cff13368bd26f030e7911e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:48:30 compute-0 systemd[1]: Started libpod-conmon-6e60428e83a9e92519b4d4cce3af64339da12a3f6cff13368bd26f030e7911e7.scope.
Jan 23 09:48:30 compute-0 ceph-mon[74335]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:30 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:30 compute-0 ceph-mon[74335]: Added label _admin to host compute-0
Jan 23 09:48:30 compute-0 ceph-mon[74335]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:30 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/93733388' entity='client.admin' 
Jan 23 09:48:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec08e5a0512539c810ccc01221ca98b33a4d9d2c408b511de65e8909dbbde29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec08e5a0512539c810ccc01221ca98b33a4d9d2c408b511de65e8909dbbde29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec08e5a0512539c810ccc01221ca98b33a4d9d2c408b511de65e8909dbbde29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec08e5a0512539c810ccc01221ca98b33a4d9d2c408b511de65e8909dbbde29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:30 compute-0 podman[77053]: 2026-01-23 09:48:30.910827593 +0000 UTC m=+0.108846106 container init 6e60428e83a9e92519b4d4cce3af64339da12a3f6cff13368bd26f030e7911e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Jan 23 09:48:30 compute-0 podman[77053]: 2026-01-23 09:48:30.823994662 +0000 UTC m=+0.022013175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:48:30 compute-0 podman[77053]: 2026-01-23 09:48:30.920531598 +0000 UTC m=+0.118550101 container start 6e60428e83a9e92519b4d4cce3af64339da12a3f6cff13368bd26f030e7911e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:48:30 compute-0 podman[77053]: 2026-01-23 09:48:30.923730741 +0000 UTC m=+0.121749244 container attach 6e60428e83a9e92519b4d4cce3af64339da12a3f6cff13368bd26f030e7911e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 23 09:48:31 compute-0 sudo[77099]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebdieoqyuqumlkgdwmocoxpxowqsynmq ; /usr/bin/python3'
Jan 23 09:48:31 compute-0 sudo[77099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:48:31 compute-0 python3[77104]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:48:31 compute-0 podman[77118]: 2026-01-23 09:48:31.522162477 +0000 UTC m=+0.112010510 container create 4317e008dc077c171321e5236237bb7606e667fa43e6e9f9634b9ce952e2413d (image=quay.io/ceph/ceph:v19, name=angry_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:31 compute-0 podman[77118]: 2026-01-23 09:48:31.433583534 +0000 UTC m=+0.023431587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:31 compute-0 systemd[1]: Started libpod-conmon-4317e008dc077c171321e5236237bb7606e667fa43e6e9f9634b9ce952e2413d.scope.
Jan 23 09:48:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/479bf0875c226ddd9d5e7a8ab6a61e4f1fbb0a52d5a417150d5a6c3103e912d2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/479bf0875c226ddd9d5e7a8ab6a61e4f1fbb0a52d5a417150d5a6c3103e912d2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:31 compute-0 loving_benz[77069]: [
Jan 23 09:48:31 compute-0 loving_benz[77069]:     {
Jan 23 09:48:31 compute-0 loving_benz[77069]:         "available": false,
Jan 23 09:48:31 compute-0 loving_benz[77069]:         "being_replaced": false,
Jan 23 09:48:31 compute-0 loving_benz[77069]:         "ceph_device_lvm": false,
Jan 23 09:48:31 compute-0 loving_benz[77069]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 09:48:31 compute-0 loving_benz[77069]:         "lsm_data": {},
Jan 23 09:48:31 compute-0 loving_benz[77069]:         "lvs": [],
Jan 23 09:48:31 compute-0 loving_benz[77069]:         "path": "/dev/sr0",
Jan 23 09:48:31 compute-0 loving_benz[77069]:         "rejected_reasons": [
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "Insufficient space (<5GB)",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "Has a FileSystem"
Jan 23 09:48:31 compute-0 loving_benz[77069]:         ],
Jan 23 09:48:31 compute-0 loving_benz[77069]:         "sys_api": {
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "actuators": null,
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "device_nodes": [
Jan 23 09:48:31 compute-0 loving_benz[77069]:                 "sr0"
Jan 23 09:48:31 compute-0 loving_benz[77069]:             ],
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "devname": "sr0",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "human_readable_size": "482.00 KB",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "id_bus": "ata",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "model": "QEMU DVD-ROM",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "nr_requests": "2",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "parent": "/dev/sr0",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "partitions": {},
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "path": "/dev/sr0",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "removable": "1",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "rev": "2.5+",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "ro": "0",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "rotational": "1",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "sas_address": "",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "sas_device_handle": "",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "scheduler_mode": "mq-deadline",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "sectors": 0,
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "sectorsize": "2048",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "size": 493568.0,
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "support_discard": "2048",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "type": "disk",
Jan 23 09:48:31 compute-0 loving_benz[77069]:             "vendor": "QEMU"
Jan 23 09:48:31 compute-0 loving_benz[77069]:         }
Jan 23 09:48:31 compute-0 loving_benz[77069]:     }
Jan 23 09:48:31 compute-0 loving_benz[77069]: ]
Jan 23 09:48:31 compute-0 systemd[1]: libpod-6e60428e83a9e92519b4d4cce3af64339da12a3f6cff13368bd26f030e7911e7.scope: Deactivated successfully.
Jan 23 09:48:31 compute-0 podman[77118]: 2026-01-23 09:48:31.839579557 +0000 UTC m=+0.429427610 container init 4317e008dc077c171321e5236237bb7606e667fa43e6e9f9634b9ce952e2413d (image=quay.io/ceph/ceph:v19, name=angry_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:48:31 compute-0 podman[77118]: 2026-01-23 09:48:31.846552881 +0000 UTC m=+0.436400914 container start 4317e008dc077c171321e5236237bb7606e667fa43e6e9f9634b9ce952e2413d (image=quay.io/ceph/ceph:v19, name=angry_taussig, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:31 compute-0 podman[77118]: 2026-01-23 09:48:31.879279839 +0000 UTC m=+0.469127902 container attach 4317e008dc077c171321e5236237bb7606e667fa43e6e9f9634b9ce952e2413d (image=quay.io/ceph/ceph:v19, name=angry_taussig, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 09:48:31 compute-0 podman[77053]: 2026-01-23 09:48:31.927686036 +0000 UTC m=+1.125704529 container died 6e60428e83a9e92519b4d4cce3af64339da12a3f6cff13368bd26f030e7911e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 23 09:48:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:48:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 23 09:48:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1805324113' entity='client.admin' 
Jan 23 09:48:32 compute-0 systemd[1]: libpod-4317e008dc077c171321e5236237bb7606e667fa43e6e9f9634b9ce952e2413d.scope: Deactivated successfully.
Jan 23 09:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-dec08e5a0512539c810ccc01221ca98b33a4d9d2c408b511de65e8909dbbde29-merged.mount: Deactivated successfully.
Jan 23 09:48:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:32 compute-0 podman[77053]: 2026-01-23 09:48:32.802414118 +0000 UTC m=+2.000432621 container remove 6e60428e83a9e92519b4d4cce3af64339da12a3f6cff13368bd26f030e7911e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 09:48:32 compute-0 sudo[76841]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:48:32 compute-0 systemd[1]: libpod-conmon-6e60428e83a9e92519b4d4cce3af64339da12a3f6cff13368bd26f030e7911e7.scope: Deactivated successfully.
Jan 23 09:48:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:32 compute-0 podman[77118]: 2026-01-23 09:48:32.87114143 +0000 UTC m=+1.460989463 container died 4317e008dc077c171321e5236237bb7606e667fa43e6e9f9634b9ce952e2413d (image=quay.io/ceph/ceph:v19, name=angry_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 09:48:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:48:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:48:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:48:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-479bf0875c226ddd9d5e7a8ab6a61e4f1fbb0a52d5a417150d5a6c3103e912d2-merged.mount: Deactivated successfully.
Jan 23 09:48:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 09:48:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 09:48:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:48:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:48:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:48:32 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:48:32 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:48:32 compute-0 podman[78092]: 2026-01-23 09:48:32.911791769 +0000 UTC m=+0.560591868 container remove 4317e008dc077c171321e5236237bb7606e667fa43e6e9f9634b9ce952e2413d (image=quay.io/ceph/ceph:v19, name=angry_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 09:48:32 compute-0 systemd[1]: libpod-conmon-4317e008dc077c171321e5236237bb7606e667fa43e6e9f9634b9ce952e2413d.scope: Deactivated successfully.
Jan 23 09:48:32 compute-0 sudo[77099]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:32 compute-0 sudo[78107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 23 09:48:32 compute-0 sudo[78107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:32 compute-0 sudo[78107]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph
Jan 23 09:48:33 compute-0 sudo[78132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78132]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:48:33 compute-0 sudo[78157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78157]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:48:33 compute-0 sudo[78182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78182]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:48:33 compute-0 sudo[78207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78207]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:48:33 compute-0 sudo[78278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78278]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:48:33 compute-0 sudo[78332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78332]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 23 09:48:33 compute-0 sudo[78380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78380]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:48:33 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:48:33 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1805324113' entity='client.admin' 
Jan 23 09:48:33 compute-0 ceph-mon[74335]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:33 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:33 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:33 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:33 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:33 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 09:48:33 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:33 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:48:33 compute-0 sudo[78405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:48:33 compute-0 sudo[78405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78405]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:48:33 compute-0 sudo[78430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78430]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:48:33 compute-0 sudo[78455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78455]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:48:33 compute-0 sudo[78503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78503]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:48:33 compute-0 sudo[78553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkgvtiaknzfgbszlznxcafyhmvpruezc ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769161713.2355678-37262-92010706902186/async_wrapper.py j471641949500 30 /home/zuul/.ansible/tmp/ansible-tmp-1769161713.2355678-37262-92010706902186/AnsiballZ_command.py _'
Jan 23 09:48:33 compute-0 sudo[78553]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:48:33 compute-0 sudo[78628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:48:33 compute-0 sudo[78628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78628]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 ansible-async_wrapper.py[78604]: Invoked with j471641949500 30 /home/zuul/.ansible/tmp/ansible-tmp-1769161713.2355678-37262-92010706902186/AnsiballZ_command.py _
Jan 23 09:48:33 compute-0 sudo[78653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:48:33 compute-0 sudo[78653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 ansible-async_wrapper.py[78680]: Starting module and watcher
Jan 23 09:48:33 compute-0 sudo[78653]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 ansible-async_wrapper.py[78680]: Start watching 78681 (30)
Jan 23 09:48:33 compute-0 ansible-async_wrapper.py[78681]: Start module (78681)
Jan 23 09:48:33 compute-0 ansible-async_wrapper.py[78604]: Return async_wrapper task started.
Jan 23 09:48:33 compute-0 sudo[78600]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 sudo[78683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:48:33 compute-0 sudo[78683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78683]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:33 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:48:33 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:48:33 compute-0 sudo[78708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 23 09:48:33 compute-0 sudo[78708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:33 compute-0 sudo[78708]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 python3[78682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:48:34 compute-0 sudo[78733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph
Jan 23 09:48:34 compute-0 sudo[78733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[78733]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 sudo[78764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:48:34 compute-0 sudo[78764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[78764]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 sudo[78796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:48:34 compute-0 sudo[78796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 podman[78756]: 2026-01-23 09:48:34.149258037 +0000 UTC m=+0.117363966 container create 99a427d0034379f688ae25f4103b8f5433f5a1d75057ce2806bb909a847cd1ed (image=quay.io/ceph/ceph:v19, name=interesting_ganguly, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:34 compute-0 sudo[78796]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 podman[78756]: 2026-01-23 09:48:34.061217001 +0000 UTC m=+0.029322990 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:34 compute-0 systemd[1]: Started libpod-conmon-99a427d0034379f688ae25f4103b8f5433f5a1d75057ce2806bb909a847cd1ed.scope.
Jan 23 09:48:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:34 compute-0 sudo[78821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:48:34 compute-0 sudo[78821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9fe34129b953040f2bac2159fdfdab818de258970039c61969bc34961e47f8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9fe34129b953040f2bac2159fdfdab818de258970039c61969bc34961e47f8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:34 compute-0 sudo[78821]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 podman[78756]: 2026-01-23 09:48:34.235869332 +0000 UTC m=+0.203975271 container init 99a427d0034379f688ae25f4103b8f5433f5a1d75057ce2806bb909a847cd1ed (image=quay.io/ceph/ceph:v19, name=interesting_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 09:48:34 compute-0 podman[78756]: 2026-01-23 09:48:34.242031123 +0000 UTC m=+0.210137052 container start 99a427d0034379f688ae25f4103b8f5433f5a1d75057ce2806bb909a847cd1ed (image=quay.io/ceph/ceph:v19, name=interesting_ganguly, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 09:48:34 compute-0 podman[78756]: 2026-01-23 09:48:34.245230626 +0000 UTC m=+0.213336565 container attach 99a427d0034379f688ae25f4103b8f5433f5a1d75057ce2806bb909a847cd1ed (image=quay.io/ceph/ceph:v19, name=interesting_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Jan 23 09:48:34 compute-0 sudo[78875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:48:34 compute-0 sudo[78875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[78875]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 sudo[78900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:48:34 compute-0 sudo[78900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[78900]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 sudo[78944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 23 09:48:34 compute-0 sudo[78944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[78944]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:48:34 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:48:34 compute-0 ceph-mon[74335]: Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:48:34 compute-0 ceph-mon[74335]: Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:48:34 compute-0 sudo[78969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:48:34 compute-0 sudo[78969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[78969]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:34 compute-0 sudo[78994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:48:34 compute-0 sudo[78994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[78994]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:48:34 compute-0 interesting_ganguly[78841]: 
Jan 23 09:48:34 compute-0 interesting_ganguly[78841]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 09:48:34 compute-0 systemd[1]: libpod-99a427d0034379f688ae25f4103b8f5433f5a1d75057ce2806bb909a847cd1ed.scope: Deactivated successfully.
Jan 23 09:48:34 compute-0 podman[78756]: 2026-01-23 09:48:34.636007414 +0000 UTC m=+0.604113363 container died 99a427d0034379f688ae25f4103b8f5433f5a1d75057ce2806bb909a847cd1ed (image=quay.io/ceph/ceph:v19, name=interesting_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:48:34 compute-0 sudo[79019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:48:34 compute-0 sudo[79019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[79019]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc9fe34129b953040f2bac2159fdfdab818de258970039c61969bc34961e47f8-merged.mount: Deactivated successfully.
Jan 23 09:48:34 compute-0 podman[78756]: 2026-01-23 09:48:34.670229846 +0000 UTC m=+0.638335775 container remove 99a427d0034379f688ae25f4103b8f5433f5a1d75057ce2806bb909a847cd1ed (image=quay.io/ceph/ceph:v19, name=interesting_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:34 compute-0 systemd[1]: libpod-conmon-99a427d0034379f688ae25f4103b8f5433f5a1d75057ce2806bb909a847cd1ed.scope: Deactivated successfully.
Jan 23 09:48:34 compute-0 ansible-async_wrapper.py[78681]: Module complete (78681)
Jan 23 09:48:34 compute-0 sudo[79053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:48:34 compute-0 sudo[79053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[79053]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 sudo[79083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:48:34 compute-0 sudo[79083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[79083]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 sudo[79131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:48:34 compute-0 sudo[79131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[79131]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:34 compute-0 sudo[79156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:48:34 compute-0 sudo[79156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:34 compute-0 sudo[79156]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:35 compute-0 sudo[79204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:48:35 compute-0 sudo[79204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:35 compute-0 sudo[79204]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:48:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:48:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:48:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:35 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 8de8f5e7-fdeb-4714-a6ce-b4050d242458 (Updating crash deployment (+1 -> 1))
Jan 23 09:48:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 23 09:48:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:48:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 09:48:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:48:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:35 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 23 09:48:35 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 23 09:48:35 compute-0 sudo[79229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:35 compute-0 sudo[79229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:35 compute-0 sudo[79229]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:35 compute-0 sudo[79255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:48:35 compute-0 sudo[79255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:35 compute-0 sudo[79300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frmsfxwjbjccakgbohqtlggbcyoebsbi ; /usr/bin/python3'
Jan 23 09:48:35 compute-0 sudo[79300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:48:35 compute-0 python3[79304]: ansible-ansible.legacy.async_status Invoked with jid=j471641949500.78604 mode=status _async_dir=/root/.ansible_async
Jan 23 09:48:35 compute-0 sudo[79300]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:35 compute-0 sudo[79367]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fykvkcbdyxpojzbofjckgrdpwqouzorv ; /usr/bin/python3'
Jan 23 09:48:35 compute-0 sudo[79367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:48:35 compute-0 ceph-mon[74335]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:48:35 compute-0 ceph-mon[74335]: Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:48:35 compute-0 ceph-mon[74335]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:35 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:35 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:35 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:35 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:48:35 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 09:48:35 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:35 compute-0 podman[79395]: 2026-01-23 09:48:35.620786457 +0000 UTC m=+0.059202464 container create e84a4ba0a4593260ffe6e82c2d827ed8d32d430330951b374a880dd6fbbe6333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_knuth, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:35 compute-0 python3[79374]: ansible-ansible.legacy.async_status Invoked with jid=j471641949500.78604 mode=cleanup _async_dir=/root/.ansible_async
Jan 23 09:48:35 compute-0 sudo[79367]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:35 compute-0 podman[79395]: 2026-01-23 09:48:35.58603895 +0000 UTC m=+0.024454987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:48:35 compute-0 systemd[1]: Started libpod-conmon-e84a4ba0a4593260ffe6e82c2d827ed8d32d430330951b374a880dd6fbbe6333.scope.
Jan 23 09:48:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:35 compute-0 podman[79395]: 2026-01-23 09:48:35.730062756 +0000 UTC m=+0.168478763 container init e84a4ba0a4593260ffe6e82c2d827ed8d32d430330951b374a880dd6fbbe6333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_knuth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 09:48:35 compute-0 podman[79395]: 2026-01-23 09:48:35.73740404 +0000 UTC m=+0.175820047 container start e84a4ba0a4593260ffe6e82c2d827ed8d32d430330951b374a880dd6fbbe6333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_knuth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:35 compute-0 romantic_knuth[79411]: 167 167
Jan 23 09:48:35 compute-0 systemd[1]: libpod-e84a4ba0a4593260ffe6e82c2d827ed8d32d430330951b374a880dd6fbbe6333.scope: Deactivated successfully.
Jan 23 09:48:35 compute-0 podman[79395]: 2026-01-23 09:48:35.756393116 +0000 UTC m=+0.194809163 container attach e84a4ba0a4593260ffe6e82c2d827ed8d32d430330951b374a880dd6fbbe6333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_knuth, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 09:48:35 compute-0 podman[79395]: 2026-01-23 09:48:35.756982603 +0000 UTC m=+0.195398620 container died e84a4ba0a4593260ffe6e82c2d827ed8d32d430330951b374a880dd6fbbe6333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 09:48:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-82d54685a393278c76fd491f0b6c12492b3adc5242c717438342597066554189-merged.mount: Deactivated successfully.
Jan 23 09:48:35 compute-0 podman[79395]: 2026-01-23 09:48:35.794859752 +0000 UTC m=+0.233275759 container remove e84a4ba0a4593260ffe6e82c2d827ed8d32d430330951b374a880dd6fbbe6333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 09:48:35 compute-0 systemd[1]: libpod-conmon-e84a4ba0a4593260ffe6e82c2d827ed8d32d430330951b374a880dd6fbbe6333.scope: Deactivated successfully.
Jan 23 09:48:35 compute-0 systemd[1]: Reloading.
Jan 23 09:48:36 compute-0 systemd-rc-local-generator[79481]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:48:36 compute-0 systemd-sysv-generator[79484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:48:36 compute-0 sudo[79454]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjtdnfhjekatxjbpfkixcboboskqpfmk ; /usr/bin/python3'
Jan 23 09:48:36 compute-0 sudo[79454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:48:36 compute-0 systemd[1]: Reloading.
Jan 23 09:48:36 compute-0 systemd-sysv-generator[79524]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:48:36 compute-0 systemd-rc-local-generator[79518]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:48:36 compute-0 python3[79491]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 09:48:36 compute-0 ceph-mon[74335]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:48:36 compute-0 ceph-mon[74335]: Deploying daemon crash.compute-0 on compute-0
Jan 23 09:48:36 compute-0 systemd[1]: Starting Ceph crash.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:48:36 compute-0 sudo[79454]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:36 compute-0 podman[79579]: 2026-01-23 09:48:36.732514526 +0000 UTC m=+0.050836709 container create ae2342c943dc0b4633eaeef8f7726de29e0287eb0b10e37d37a519b117896a21 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 23 09:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a527e429e3632ae9bb170e93202ca707bcd9bfdbac51cf50f42aad0f8ac0b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a527e429e3632ae9bb170e93202ca707bcd9bfdbac51cf50f42aad0f8ac0b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a527e429e3632ae9bb170e93202ca707bcd9bfdbac51cf50f42aad0f8ac0b4/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a527e429e3632ae9bb170e93202ca707bcd9bfdbac51cf50f42aad0f8ac0b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:36 compute-0 podman[79579]: 2026-01-23 09:48:36.789838374 +0000 UTC m=+0.108160597 container init ae2342c943dc0b4633eaeef8f7726de29e0287eb0b10e37d37a519b117896a21 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 09:48:36 compute-0 podman[79579]: 2026-01-23 09:48:36.796083407 +0000 UTC m=+0.114405590 container start ae2342c943dc0b4633eaeef8f7726de29e0287eb0b10e37d37a519b117896a21 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:36 compute-0 bash[79579]: ae2342c943dc0b4633eaeef8f7726de29e0287eb0b10e37d37a519b117896a21
Jan 23 09:48:36 compute-0 podman[79579]: 2026-01-23 09:48:36.704051293 +0000 UTC m=+0.022373506 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:48:36 compute-0 systemd[1]: Started Ceph crash.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:48:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 23 09:48:36 compute-0 sudo[79255]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:48:36 compute-0 sudo[79624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdrluctgoogpnduenxfmtvnfrcsofvff ; /usr/bin/python3'
Jan 23 09:48:36 compute-0 sudo[79624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:48:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:48:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 23 09:48:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:36 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 8de8f5e7-fdeb-4714-a6ce-b4050d242458 (Updating crash deployment (+1 -> 1))
Jan 23 09:48:36 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 8de8f5e7-fdeb-4714-a6ce-b4050d242458 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 23 09:48:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 23 09:48:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 23 09:48:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 09:48:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: 2026-01-23T09:48:36.939+0000 7f21baf35640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 23 09:48:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: 2026-01-23T09:48:36.939+0000 7f21baf35640 -1 AuthRegistry(0x7f21b4069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 23 09:48:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: 2026-01-23T09:48:36.940+0000 7f21baf35640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 23 09:48:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: 2026-01-23T09:48:36.940+0000 7f21baf35640 -1 AuthRegistry(0x7f21baf33ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 23 09:48:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: 2026-01-23T09:48:36.941+0000 7f21b8caa640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 23 09:48:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: 2026-01-23T09:48:36.941+0000 7f21baf35640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 23 09:48:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 23 09:48:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 23 09:48:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:37 compute-0 sudo[79637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:48:37 compute-0 sudo[79637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:37 compute-0 python3[79626]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:48:37 compute-0 sudo[79637]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:48:37 compute-0 sudo[79663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:37 compute-0 sudo[79663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:37 compute-0 sudo[79663]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:37 compute-0 podman[79662]: 2026-01-23 09:48:37.096365634 +0000 UTC m=+0.066401414 container create 750ff708d6e9532a6b650bf15c077f07a54ddb5b8a53132e0338f09575e75c37 (image=quay.io/ceph/ceph:v19, name=compassionate_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:48:37 compute-0 systemd[1]: Started libpod-conmon-750ff708d6e9532a6b650bf15c077f07a54ddb5b8a53132e0338f09575e75c37.scope.
Jan 23 09:48:37 compute-0 sudo[79699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 09:48:37 compute-0 sudo[79699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:37 compute-0 podman[79662]: 2026-01-23 09:48:37.058494236 +0000 UTC m=+0.028530046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfe5ea36658e98ed30ff07969ad77a12746544e44c32040ed2244853417e9c1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfe5ea36658e98ed30ff07969ad77a12746544e44c32040ed2244853417e9c1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfe5ea36658e98ed30ff07969ad77a12746544e44c32040ed2244853417e9c1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:37 compute-0 podman[79662]: 2026-01-23 09:48:37.231195731 +0000 UTC m=+0.201231531 container init 750ff708d6e9532a6b650bf15c077f07a54ddb5b8a53132e0338f09575e75c37 (image=quay.io/ceph/ceph:v19, name=compassionate_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 23 09:48:37 compute-0 podman[79662]: 2026-01-23 09:48:37.238436863 +0000 UTC m=+0.208472643 container start 750ff708d6e9532a6b650bf15c077f07a54ddb5b8a53132e0338f09575e75c37 (image=quay.io/ceph/ceph:v19, name=compassionate_bassi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:48:37 compute-0 podman[79662]: 2026-01-23 09:48:37.332046842 +0000 UTC m=+0.302082652 container attach 750ff708d6e9532a6b650bf15c077f07a54ddb5b8a53132e0338f09575e75c37 (image=quay.io/ceph/ceph:v19, name=compassionate_bassi, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 09:48:37 compute-0 ceph-mon[74335]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:37 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:37 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:37 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:37 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:37 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:37 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:37 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:48:37 compute-0 compassionate_bassi[79725]: 
Jan 23 09:48:37 compute-0 compassionate_bassi[79725]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 09:48:37 compute-0 systemd[1]: libpod-750ff708d6e9532a6b650bf15c077f07a54ddb5b8a53132e0338f09575e75c37.scope: Deactivated successfully.
Jan 23 09:48:37 compute-0 podman[79662]: 2026-01-23 09:48:37.671309572 +0000 UTC m=+0.641345352 container died 750ff708d6e9532a6b650bf15c077f07a54ddb5b8a53132e0338f09575e75c37 (image=quay.io/ceph/ceph:v19, name=compassionate_bassi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 23 09:48:37 compute-0 podman[79821]: 2026-01-23 09:48:37.682604993 +0000 UTC m=+0.095140926 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:48:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bfe5ea36658e98ed30ff07969ad77a12746544e44c32040ed2244853417e9c1-merged.mount: Deactivated successfully.
Jan 23 09:48:37 compute-0 podman[79662]: 2026-01-23 09:48:37.851824946 +0000 UTC m=+0.821860726 container remove 750ff708d6e9532a6b650bf15c077f07a54ddb5b8a53132e0338f09575e75c37 (image=quay.io/ceph/ceph:v19, name=compassionate_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 23 09:48:37 compute-0 systemd[1]: libpod-conmon-750ff708d6e9532a6b650bf15c077f07a54ddb5b8a53132e0338f09575e75c37.scope: Deactivated successfully.
Jan 23 09:48:37 compute-0 sudo[79624]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:37 compute-0 podman[79855]: 2026-01-23 09:48:37.908775663 +0000 UTC m=+0.115391209 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 09:48:37 compute-0 podman[79821]: 2026-01-23 09:48:37.913697507 +0000 UTC m=+0.326233460 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:38 compute-0 sudo[79699]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:38 compute-0 sudo[79930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spvyqrfhxpmubzduuaodxcszplvrgbtu ; /usr/bin/python3'
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:48:38 compute-0 sudo[79930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:38 compute-0 sudo[79933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:48:38 compute-0 sudo[79933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:38 compute-0 sudo[79933]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 09:48:38 compute-0 python3[79932]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:48:38 compute-0 sudo[79958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:38 compute-0 sudo[79958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:38 compute-0 sudo[79958]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:38 compute-0 podman[79981]: 2026-01-23 09:48:38.352773468 +0000 UTC m=+0.041189197 container create 8c847025fe1ee1d9b813a9b8ea0eec8888e1790c372d571a68019761dbc9b2e1 (image=quay.io/ceph/ceph:v19, name=elastic_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 09:48:38 compute-0 sudo[79989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:48:38 compute-0 sudo[79989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:38 compute-0 systemd[1]: Started libpod-conmon-8c847025fe1ee1d9b813a9b8ea0eec8888e1790c372d571a68019761dbc9b2e1.scope.
Jan 23 09:48:38 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055b323dd4532d067259b5804011b3b7eb0a5149710d9325c397ef8f807615e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055b323dd4532d067259b5804011b3b7eb0a5149710d9325c397ef8f807615e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055b323dd4532d067259b5804011b3b7eb0a5149710d9325c397ef8f807615e1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:38 compute-0 podman[79981]: 2026-01-23 09:48:38.334639107 +0000 UTC m=+0.023054856 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:38 compute-0 podman[79981]: 2026-01-23 09:48:38.437792826 +0000 UTC m=+0.126208565 container init 8c847025fe1ee1d9b813a9b8ea0eec8888e1790c372d571a68019761dbc9b2e1 (image=quay.io/ceph/ceph:v19, name=elastic_euler, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 09:48:38 compute-0 podman[79981]: 2026-01-23 09:48:38.447171101 +0000 UTC m=+0.135586820 container start 8c847025fe1ee1d9b813a9b8ea0eec8888e1790c372d571a68019761dbc9b2e1 (image=quay.io/ceph/ceph:v19, name=elastic_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:48:38 compute-0 podman[79981]: 2026-01-23 09:48:38.450463107 +0000 UTC m=+0.138878836 container attach 8c847025fe1ee1d9b813a9b8ea0eec8888e1790c372d571a68019761dbc9b2e1 (image=quay.io/ceph/ceph:v19, name=elastic_euler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:38 compute-0 podman[80062]: 2026-01-23 09:48:38.67331539 +0000 UTC m=+0.040331962 container create 0ee09c62fa7ac0c3b0c379604134e2809e84c3bc8cb35b743fd71ab2ff465a38 (image=quay.io/ceph/ceph:v19, name=awesome_morse, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:38 compute-0 systemd[1]: Started libpod-conmon-0ee09c62fa7ac0c3b0c379604134e2809e84c3bc8cb35b743fd71ab2ff465a38.scope.
Jan 23 09:48:38 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:38 compute-0 podman[80062]: 2026-01-23 09:48:38.746517982 +0000 UTC m=+0.113534574 container init 0ee09c62fa7ac0c3b0c379604134e2809e84c3bc8cb35b743fd71ab2ff465a38 (image=quay.io/ceph/ceph:v19, name=awesome_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 09:48:38 compute-0 podman[80062]: 2026-01-23 09:48:38.654764117 +0000 UTC m=+0.021780719 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:38 compute-0 podman[80062]: 2026-01-23 09:48:38.752536878 +0000 UTC m=+0.119553450 container start 0ee09c62fa7ac0c3b0c379604134e2809e84c3bc8cb35b743fd71ab2ff465a38 (image=quay.io/ceph/ceph:v19, name=awesome_morse, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:38 compute-0 awesome_morse[80079]: 167 167
Jan 23 09:48:38 compute-0 systemd[1]: libpod-0ee09c62fa7ac0c3b0c379604134e2809e84c3bc8cb35b743fd71ab2ff465a38.scope: Deactivated successfully.
Jan 23 09:48:38 compute-0 podman[80062]: 2026-01-23 09:48:38.757133183 +0000 UTC m=+0.124149835 container attach 0ee09c62fa7ac0c3b0c379604134e2809e84c3bc8cb35b743fd71ab2ff465a38 (image=quay.io/ceph/ceph:v19, name=awesome_morse, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:48:38 compute-0 podman[80062]: 2026-01-23 09:48:38.758570535 +0000 UTC m=+0.125587117 container died 0ee09c62fa7ac0c3b0c379604134e2809e84c3bc8cb35b743fd71ab2ff465a38 (image=quay.io/ceph/ceph:v19, name=awesome_morse, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 1 completed events
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:48:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d5e17c11b88c7e08c29ae34ef1726248a36404e8a3d6d4896bc9cfff4c6d03d-merged.mount: Deactivated successfully.
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/502598474' entity='client.admin' 
Jan 23 09:48:38 compute-0 podman[80062]: 2026-01-23 09:48:38.817749027 +0000 UTC m=+0.184765599 container remove 0ee09c62fa7ac0c3b0c379604134e2809e84c3bc8cb35b743fd71ab2ff465a38 (image=quay.io/ceph/ceph:v19, name=awesome_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:48:38 compute-0 systemd[1]: libpod-conmon-0ee09c62fa7ac0c3b0c379604134e2809e84c3bc8cb35b743fd71ab2ff465a38.scope: Deactivated successfully.
Jan 23 09:48:38 compute-0 systemd[1]: libpod-8c847025fe1ee1d9b813a9b8ea0eec8888e1790c372d571a68019761dbc9b2e1.scope: Deactivated successfully.
Jan 23 09:48:38 compute-0 podman[79981]: 2026-01-23 09:48:38.834438985 +0000 UTC m=+0.522854714 container died 8c847025fe1ee1d9b813a9b8ea0eec8888e1790c372d571a68019761dbc9b2e1 (image=quay.io/ceph/ceph:v19, name=elastic_euler, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 09:48:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-055b323dd4532d067259b5804011b3b7eb0a5149710d9325c397ef8f807615e1-merged.mount: Deactivated successfully.
Jan 23 09:48:38 compute-0 sudo[79989]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:38 compute-0 ansible-async_wrapper.py[78680]: Done in kid B.
Jan 23 09:48:38 compute-0 podman[79981]: 2026-01-23 09:48:38.873579271 +0000 UTC m=+0.561995000 container remove 8c847025fe1ee1d9b813a9b8ea0eec8888e1790c372d571a68019761dbc9b2e1 (image=quay.io/ceph/ceph:v19, name=elastic_euler, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.nbdygh (unknown last config time)...
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.nbdygh (unknown last config time)...
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.nbdygh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nbdygh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 09:48:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:48:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:38 compute-0 systemd[1]: libpod-conmon-8c847025fe1ee1d9b813a9b8ea0eec8888e1790c372d571a68019761dbc9b2e1.scope: Deactivated successfully.
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.nbdygh on compute-0
Jan 23 09:48:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.nbdygh on compute-0
Jan 23 09:48:38 compute-0 sudo[79930]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:38 compute-0 sudo[80109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:38 compute-0 sudo[80109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:38 compute-0 sudo[80109]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:39 compute-0 sudo[80134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:48:39 compute-0 sudo[80134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:39 compute-0 sudo[80181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlorhkkqczzggmyzcbgpwjerepvebpyu ; /usr/bin/python3'
Jan 23 09:48:39 compute-0 sudo[80181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 09:48:39 compute-0 ceph-mon[74335]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/502598474' entity='client.admin' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nbdygh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:39 compute-0 python3[80184]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:48:39 compute-0 podman[80185]: 2026-01-23 09:48:39.225526702 +0000 UTC m=+0.038000743 container create d629cf937bd27ab5851720c95b5e2f1c42cf59e6bde0106ceb9b7a4128b06587 (image=quay.io/ceph/ceph:v19, name=stoic_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:48:39 compute-0 systemd[1]: Started libpod-conmon-d629cf937bd27ab5851720c95b5e2f1c42cf59e6bde0106ceb9b7a4128b06587.scope.
Jan 23 09:48:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebc4278c9963eee20e9b040e17bedec933981f3365141a34b1a471ac00cf2ba/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebc4278c9963eee20e9b040e17bedec933981f3365141a34b1a471ac00cf2ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebc4278c9963eee20e9b040e17bedec933981f3365141a34b1a471ac00cf2ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:39 compute-0 podman[80185]: 2026-01-23 09:48:39.305687148 +0000 UTC m=+0.118161209 container init d629cf937bd27ab5851720c95b5e2f1c42cf59e6bde0106ceb9b7a4128b06587 (image=quay.io/ceph/ceph:v19, name=stoic_cannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 09:48:39 compute-0 podman[80185]: 2026-01-23 09:48:39.209572745 +0000 UTC m=+0.022046816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:39 compute-0 podman[80185]: 2026-01-23 09:48:39.311420756 +0000 UTC m=+0.123894807 container start d629cf937bd27ab5851720c95b5e2f1c42cf59e6bde0106ceb9b7a4128b06587 (image=quay.io/ceph/ceph:v19, name=stoic_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:48:39 compute-0 podman[80185]: 2026-01-23 09:48:39.316255998 +0000 UTC m=+0.128730039 container attach d629cf937bd27ab5851720c95b5e2f1c42cf59e6bde0106ceb9b7a4128b06587 (image=quay.io/ceph/ceph:v19, name=stoic_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 09:48:39 compute-0 podman[80220]: 2026-01-23 09:48:39.331844634 +0000 UTC m=+0.051146828 container create 09a489057df437941748bb4c737883851f9cfe98089ef00842067fafc029c50f (image=quay.io/ceph/ceph:v19, name=infallible_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:48:39 compute-0 systemd[1]: Started libpod-conmon-09a489057df437941748bb4c737883851f9cfe98089ef00842067fafc029c50f.scope.
Jan 23 09:48:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:39 compute-0 podman[80220]: 2026-01-23 09:48:39.400219315 +0000 UTC m=+0.119521529 container init 09a489057df437941748bb4c737883851f9cfe98089ef00842067fafc029c50f (image=quay.io/ceph/ceph:v19, name=infallible_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:48:39 compute-0 podman[80220]: 2026-01-23 09:48:39.40620807 +0000 UTC m=+0.125510264 container start 09a489057df437941748bb4c737883851f9cfe98089ef00842067fafc029c50f (image=quay.io/ceph/ceph:v19, name=infallible_tu, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 09:48:39 compute-0 podman[80220]: 2026-01-23 09:48:39.409292651 +0000 UTC m=+0.128594865 container attach 09a489057df437941748bb4c737883851f9cfe98089ef00842067fafc029c50f (image=quay.io/ceph/ceph:v19, name=infallible_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:48:39 compute-0 infallible_tu[80238]: 167 167
Jan 23 09:48:39 compute-0 systemd[1]: libpod-09a489057df437941748bb4c737883851f9cfe98089ef00842067fafc029c50f.scope: Deactivated successfully.
Jan 23 09:48:39 compute-0 podman[80220]: 2026-01-23 09:48:39.411108524 +0000 UTC m=+0.130410718 container died 09a489057df437941748bb4c737883851f9cfe98089ef00842067fafc029c50f (image=quay.io/ceph/ceph:v19, name=infallible_tu, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:39 compute-0 podman[80220]: 2026-01-23 09:48:39.314445615 +0000 UTC m=+0.033747829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c54d0f380c929c6a6de5903526bca95d139a70290f3b8053a79dc31f5b1f1eea-merged.mount: Deactivated successfully.
Jan 23 09:48:39 compute-0 podman[80220]: 2026-01-23 09:48:39.448619282 +0000 UTC m=+0.167921466 container remove 09a489057df437941748bb4c737883851f9cfe98089ef00842067fafc029c50f (image=quay.io/ceph/ceph:v19, name=infallible_tu, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 09:48:39 compute-0 systemd[1]: libpod-conmon-09a489057df437941748bb4c737883851f9cfe98089ef00842067fafc029c50f.scope: Deactivated successfully.
Jan 23 09:48:39 compute-0 sudo[80134]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:48:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:48:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:48:39 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:48:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:48:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 sudo[80273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:48:39 compute-0 sudo[80273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:39 compute-0 sudo[80273]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 23 09:48:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3216964111' entity='client.admin' 
Jan 23 09:48:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:48:39 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:48:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:48:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:48:39 compute-0 systemd[1]: libpod-d629cf937bd27ab5851720c95b5e2f1c42cf59e6bde0106ceb9b7a4128b06587.scope: Deactivated successfully.
Jan 23 09:48:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:39 compute-0 podman[80185]: 2026-01-23 09:48:39.711997491 +0000 UTC m=+0.524471532 container died d629cf937bd27ab5851720c95b5e2f1c42cf59e6bde0106ceb9b7a4128b06587 (image=quay.io/ceph/ceph:v19, name=stoic_cannon, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ebc4278c9963eee20e9b040e17bedec933981f3365141a34b1a471ac00cf2ba-merged.mount: Deactivated successfully.
Jan 23 09:48:39 compute-0 podman[80185]: 2026-01-23 09:48:39.74820798 +0000 UTC m=+0.560682021 container remove d629cf937bd27ab5851720c95b5e2f1c42cf59e6bde0106ceb9b7a4128b06587 (image=quay.io/ceph/ceph:v19, name=stoic_cannon, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 09:48:39 compute-0 systemd[1]: libpod-conmon-d629cf937bd27ab5851720c95b5e2f1c42cf59e6bde0106ceb9b7a4128b06587.scope: Deactivated successfully.
Jan 23 09:48:39 compute-0 sudo[80301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:48:39 compute-0 sudo[80301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:39 compute-0 sudo[80301]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:39 compute-0 sudo[80181]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:39 compute-0 sudo[80359]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyesfsabmjoepcjsspjtvgcelyswmxnl ; /usr/bin/python3'
Jan 23 09:48:39 compute-0 sudo[80359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:48:40 compute-0 python3[80361]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:48:40 compute-0 podman[80362]: 2026-01-23 09:48:40.160666602 +0000 UTC m=+0.038870318 container create 09ce4f49321425b6c60e2d04bd602fef92f65e88d25341f418def223521f317e (image=quay.io/ceph/ceph:v19, name=frosty_jemison, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 23 09:48:40 compute-0 systemd[1]: Started libpod-conmon-09ce4f49321425b6c60e2d04bd602fef92f65e88d25341f418def223521f317e.scope.
Jan 23 09:48:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfbd97bac82256b3d58633eca836eb8274537090d55148d3791b7751573ebee/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfbd97bac82256b3d58633eca836eb8274537090d55148d3791b7751573ebee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfbd97bac82256b3d58633eca836eb8274537090d55148d3791b7751573ebee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:40 compute-0 podman[80362]: 2026-01-23 09:48:40.213635113 +0000 UTC m=+0.091838829 container init 09ce4f49321425b6c60e2d04bd602fef92f65e88d25341f418def223521f317e (image=quay.io/ceph/ceph:v19, name=frosty_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 09:48:40 compute-0 podman[80362]: 2026-01-23 09:48:40.219821304 +0000 UTC m=+0.098025020 container start 09ce4f49321425b6c60e2d04bd602fef92f65e88d25341f418def223521f317e (image=quay.io/ceph/ceph:v19, name=frosty_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 09:48:40 compute-0 podman[80362]: 2026-01-23 09:48:40.223327666 +0000 UTC m=+0.101531402 container attach 09ce4f49321425b6c60e2d04bd602fef92f65e88d25341f418def223521f317e (image=quay.io/ceph/ceph:v19, name=frosty_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 09:48:40 compute-0 podman[80362]: 2026-01-23 09:48:40.144262832 +0000 UTC m=+0.022466568 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:40 compute-0 ceph-mon[74335]: Reconfiguring mgr.compute-0.nbdygh (unknown last config time)...
Jan 23 09:48:40 compute-0 ceph-mon[74335]: Reconfiguring daemon mgr.compute-0.nbdygh on compute-0
Jan 23 09:48:40 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:40 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:40 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:40 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:48:40 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:40 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3216964111' entity='client.admin' 
Jan 23 09:48:40 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:40 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:48:40 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 23 09:48:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/248593483' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 23 09:48:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 23 09:48:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:48:41 compute-0 ceph-mon[74335]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:41 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/248593483' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 23 09:48:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/248593483' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 23 09:48:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 23 09:48:41 compute-0 frosty_jemison[80377]: set require_min_compat_client to mimic
Jan 23 09:48:41 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 23 09:48:41 compute-0 systemd[1]: libpod-09ce4f49321425b6c60e2d04bd602fef92f65e88d25341f418def223521f317e.scope: Deactivated successfully.
Jan 23 09:48:41 compute-0 podman[80362]: 2026-01-23 09:48:41.798097638 +0000 UTC m=+1.676301354 container died 09ce4f49321425b6c60e2d04bd602fef92f65e88d25341f418def223521f317e (image=quay.io/ceph/ceph:v19, name=frosty_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 09:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecfbd97bac82256b3d58633eca836eb8274537090d55148d3791b7751573ebee-merged.mount: Deactivated successfully.
Jan 23 09:48:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:48:42 compute-0 podman[80362]: 2026-01-23 09:48:42.068570184 +0000 UTC m=+1.946773900 container remove 09ce4f49321425b6c60e2d04bd602fef92f65e88d25341f418def223521f317e (image=quay.io/ceph/ceph:v19, name=frosty_jemison, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:48:42 compute-0 systemd[1]: libpod-conmon-09ce4f49321425b6c60e2d04bd602fef92f65e88d25341f418def223521f317e.scope: Deactivated successfully.
Jan 23 09:48:42 compute-0 sudo[80359]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:42 compute-0 sudo[80436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbiempbqxgcflnlkwlyshdidzstvtbop ; /usr/bin/python3'
Jan 23 09:48:42 compute-0 sudo[80436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:48:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:42 compute-0 python3[80438]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:48:42 compute-0 podman[80439]: 2026-01-23 09:48:42.710032641 +0000 UTC m=+0.039371241 container create 0efe8730b89749eaaa3e7b346da655f38e4bf595754a336c399a5a8728d59539 (image=quay.io/ceph/ceph:v19, name=brave_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 09:48:42 compute-0 systemd[1]: Started libpod-conmon-0efe8730b89749eaaa3e7b346da655f38e4bf595754a336c399a5a8728d59539.scope.
Jan 23 09:48:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c13f4a64e68922a7ffbdb470b9272c4770bd6ac81c0b46597a2fc65a18c57fe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c13f4a64e68922a7ffbdb470b9272c4770bd6ac81c0b46597a2fc65a18c57fe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c13f4a64e68922a7ffbdb470b9272c4770bd6ac81c0b46597a2fc65a18c57fe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:42 compute-0 podman[80439]: 2026-01-23 09:48:42.78037915 +0000 UTC m=+0.109717780 container init 0efe8730b89749eaaa3e7b346da655f38e4bf595754a336c399a5a8728d59539 (image=quay.io/ceph/ceph:v19, name=brave_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 09:48:42 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/248593483' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 23 09:48:42 compute-0 ceph-mon[74335]: osdmap e3: 0 total, 0 up, 0 in
Jan 23 09:48:42 compute-0 ceph-mon[74335]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:42 compute-0 podman[80439]: 2026-01-23 09:48:42.786722244 +0000 UTC m=+0.116060834 container start 0efe8730b89749eaaa3e7b346da655f38e4bf595754a336c399a5a8728d59539 (image=quay.io/ceph/ceph:v19, name=brave_banach, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 09:48:42 compute-0 podman[80439]: 2026-01-23 09:48:42.693308392 +0000 UTC m=+0.022647012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:42 compute-0 podman[80439]: 2026-01-23 09:48:42.790283711 +0000 UTC m=+0.119622311 container attach 0efe8730b89749eaaa3e7b346da655f38e4bf595754a336c399a5a8728d59539 (image=quay.io/ceph/ceph:v19, name=brave_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:48:43 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:43 compute-0 sudo[80479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:48:43 compute-0 sudo[80479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:43 compute-0 sudo[80479]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:43 compute-0 sudo[80504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Jan 23 09:48:43 compute-0 sudo[80504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:43 compute-0 sudo[80504]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 09:48:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 09:48:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 09:48:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 09:48:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:43 compute-0 ceph-mgr[74633]: [cephadm INFO root] Added host compute-0
Jan 23 09:48:43 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 23 09:48:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:48:43 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:48:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:48:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:48:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:43 compute-0 sudo[80548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:48:43 compute-0 sudo[80548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:48:43 compute-0 sudo[80548]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:43 compute-0 ceph-mon[74335]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:48:43 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:43 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:43 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:43 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:43 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:48:43 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:48:43 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:44 compute-0 ceph-mon[74335]: Added host compute-0
Jan 23 09:48:44 compute-0 ceph-mon[74335]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:44 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 23 09:48:44 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 23 09:48:45 compute-0 ceph-mon[74335]: Deploying cephadm binary to compute-1
Jan 23 09:48:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:46 compute-0 ceph-mon[74335]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:48:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:48 compute-0 ceph-mon[74335]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 09:48:48 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:48 compute-0 ceph-mgr[74633]: [cephadm INFO root] Added host compute-1
Jan 23 09:48:48 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 23 09:48:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:48:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:48:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:50 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 23 09:48:50 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 23 09:48:50 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:50 compute-0 ceph-mon[74335]: Added host compute-1
Jan 23 09:48:50 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:48:51 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:51 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:51 compute-0 ceph-mon[74335]: Deploying cephadm binary to compute-2
Jan 23 09:48:51 compute-0 ceph-mon[74335]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:51 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:48:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:52 compute-0 ceph-mon[74335]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 23 09:48:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: [cephadm INFO root] Added host compute-2
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 23 09:48:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 23 09:48:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 23 09:48:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 09:48:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 23 09:48:53 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 23 09:48:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 23 09:48:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:53 compute-0 brave_banach[80455]: Added host 'compute-0' with addr '192.168.122.100'
Jan 23 09:48:53 compute-0 brave_banach[80455]: Added host 'compute-1' with addr '192.168.122.101'
Jan 23 09:48:53 compute-0 brave_banach[80455]: Added host 'compute-2' with addr '192.168.122.102'
Jan 23 09:48:53 compute-0 brave_banach[80455]: Scheduled mon update...
Jan 23 09:48:53 compute-0 brave_banach[80455]: Scheduled mgr update...
Jan 23 09:48:53 compute-0 brave_banach[80455]: Scheduled osd.default_drive_group update...
Jan 23 09:48:54 compute-0 systemd[1]: libpod-0efe8730b89749eaaa3e7b346da655f38e4bf595754a336c399a5a8728d59539.scope: Deactivated successfully.
Jan 23 09:48:54 compute-0 podman[80439]: 2026-01-23 09:48:54.007873771 +0000 UTC m=+11.337212391 container died 0efe8730b89749eaaa3e7b346da655f38e4bf595754a336c399a5a8728d59539 (image=quay.io/ceph/ceph:v19, name=brave_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c13f4a64e68922a7ffbdb470b9272c4770bd6ac81c0b46597a2fc65a18c57fe-merged.mount: Deactivated successfully.
Jan 23 09:48:54 compute-0 podman[80439]: 2026-01-23 09:48:54.05668408 +0000 UTC m=+11.386022680 container remove 0efe8730b89749eaaa3e7b346da655f38e4bf595754a336c399a5a8728d59539 (image=quay.io/ceph/ceph:v19, name=brave_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:48:54 compute-0 systemd[1]: libpod-conmon-0efe8730b89749eaaa3e7b346da655f38e4bf595754a336c399a5a8728d59539.scope: Deactivated successfully.
Jan 23 09:48:54 compute-0 sudo[80436]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:54 compute-0 sudo[80611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skbgpyovqldgcizmgcsmhjvxepsjbwdu ; /usr/bin/python3'
Jan 23 09:48:54 compute-0 sudo[80611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:48:54 compute-0 python3[80613]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:48:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:54 compute-0 podman[80615]: 2026-01-23 09:48:54.573815989 +0000 UTC m=+0.045743345 container create 601abe6ac3986fd3bf206364782ee4708b1e0fbb4e61af549069a6443eb531a6 (image=quay.io/ceph/ceph:v19, name=peaceful_keldysh, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Jan 23 09:48:54 compute-0 systemd[1]: Started libpod-conmon-601abe6ac3986fd3bf206364782ee4708b1e0fbb4e61af549069a6443eb531a6.scope.
Jan 23 09:48:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:48:54 compute-0 podman[80615]: 2026-01-23 09:48:54.554909241 +0000 UTC m=+0.026836617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9866d2c6832b2c240cc237df16f632362ce2737e718901e10e6baea32c2873/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9866d2c6832b2c240cc237df16f632362ce2737e718901e10e6baea32c2873/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9866d2c6832b2c240cc237df16f632362ce2737e718901e10e6baea32c2873/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:48:54 compute-0 podman[80615]: 2026-01-23 09:48:54.663107407 +0000 UTC m=+0.135034783 container init 601abe6ac3986fd3bf206364782ee4708b1e0fbb4e61af549069a6443eb531a6 (image=quay.io/ceph/ceph:v19, name=peaceful_keldysh, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 09:48:54 compute-0 podman[80615]: 2026-01-23 09:48:54.669895444 +0000 UTC m=+0.141822800 container start 601abe6ac3986fd3bf206364782ee4708b1e0fbb4e61af549069a6443eb531a6 (image=quay.io/ceph/ceph:v19, name=peaceful_keldysh, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 09:48:54 compute-0 podman[80615]: 2026-01-23 09:48:54.676142445 +0000 UTC m=+0.148069801 container attach 601abe6ac3986fd3bf206364782ee4708b1e0fbb4e61af549069a6443eb531a6 (image=quay.io/ceph/ceph:v19, name=peaceful_keldysh, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 23 09:48:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:54 compute-0 ceph-mon[74335]: Added host compute-2
Jan 23 09:48:54 compute-0 ceph-mon[74335]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 23 09:48:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:54 compute-0 ceph-mon[74335]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 23 09:48:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:54 compute-0 ceph-mon[74335]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 23 09:48:54 compute-0 ceph-mon[74335]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 23 09:48:54 compute-0 ceph-mon[74335]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 23 09:48:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:48:54 compute-0 ceph-mon[74335]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 23 09:48:55 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/482377098' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 09:48:55 compute-0 peaceful_keldysh[80631]: 
Jan 23 09:48:55 compute-0 peaceful_keldysh[80631]: {"fsid":"f3005f84-239a-55b6-a948-8f1fb592b920","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":73,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-23T09:47:38:565964+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-23T09:47:38.571725+0000","services":{}},"progress_events":{}}
Jan 23 09:48:55 compute-0 systemd[1]: libpod-601abe6ac3986fd3bf206364782ee4708b1e0fbb4e61af549069a6443eb531a6.scope: Deactivated successfully.
Jan 23 09:48:55 compute-0 podman[80615]: 2026-01-23 09:48:55.142940753 +0000 UTC m=+0.614868109 container died 601abe6ac3986fd3bf206364782ee4708b1e0fbb4e61af549069a6443eb531a6 (image=quay.io/ceph/ceph:v19, name=peaceful_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 09:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac9866d2c6832b2c240cc237df16f632362ce2737e718901e10e6baea32c2873-merged.mount: Deactivated successfully.
Jan 23 09:48:55 compute-0 podman[80615]: 2026-01-23 09:48:55.191082643 +0000 UTC m=+0.663009999 container remove 601abe6ac3986fd3bf206364782ee4708b1e0fbb4e61af549069a6443eb531a6 (image=quay.io/ceph/ceph:v19, name=peaceful_keldysh, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 23 09:48:55 compute-0 systemd[1]: libpod-conmon-601abe6ac3986fd3bf206364782ee4708b1e0fbb4e61af549069a6443eb531a6.scope: Deactivated successfully.
Jan 23 09:48:55 compute-0 sudo[80611]: pam_unix(sudo:session): session closed for user root
Jan 23 09:48:56 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/482377098' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 09:48:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:48:57 compute-0 ceph-mon[74335]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:48:58 compute-0 ceph-mon[74335]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:00 compute-0 ceph-mon[74335]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:02 compute-0 ceph-mon[74335]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:05 compute-0 ceph-mon[74335]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:06 compute-0 ceph-mon[74335]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:49:08
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [balancer INFO root] No pools available
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:49:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:49:09 compute-0 ceph-mon[74335]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:10 compute-0 ceph-mon[74335]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:12 compute-0 ceph-mon[74335]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:14 compute-0 ceph-mon[74335]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:16 compute-0 ceph-mon[74335]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:18 compute-0 ceph-mon[74335]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:49:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:49:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:49:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:49:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 23 09:49:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 09:49:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:49:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:49:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:49:19 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:49:19 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:49:19 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:49:19 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:49:20 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:20 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:20 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:20 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:20 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 09:49:20 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:20 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:49:20 compute-0 ceph-mon[74335]: Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:49:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:20 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:49:20 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:49:21 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:49:21 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:49:21 compute-0 ceph-mon[74335]: Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:49:21 compute-0 ceph-mon[74335]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:49:21 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:49:21 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:49:21 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:21 compute-0 ceph-mgr[74633]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 23 09:49:21 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 23 09:49:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:21 compute-0 ceph-mgr[74633]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 23 09:49:21 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 23 09:49:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:21 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev cdbdfb8a-a0e1-40bf-b34f-52ff94e66345 (Updating crash deployment (+1 -> 2))
Jan 23 09:49:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 23 09:49:21 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:49:21.696+0000 7fa5594b7640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: service_name: mon
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: placement:
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   hosts:
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   - compute-0
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   - compute-1
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   - compute-2
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:49:21.696+0000 7fa5594b7640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: service_name: mgr
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: placement:
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   hosts:
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   - compute-0
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   - compute-1
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   - compute-2
Jan 23 09:49:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 23 09:49:21 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 09:49:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:49:21 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:21 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 23 09:49:21 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 23 09:49:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:22 compute-0 ceph-mon[74335]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:49:22 compute-0 ceph-mon[74335]: Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:49:22 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:22 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:22 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:22 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:49:22 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 09:49:22 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:22 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 23 09:49:23 compute-0 ceph-mon[74335]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 23 09:49:23 compute-0 ceph-mon[74335]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:23 compute-0 ceph-mon[74335]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 23 09:49:23 compute-0 ceph-mon[74335]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:23 compute-0 ceph-mon[74335]: Deploying daemon crash.compute-1 on compute-1
Jan 23 09:49:23 compute-0 ceph-mon[74335]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 23 09:49:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:49:24 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:49:24 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 23 09:49:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:25 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev cdbdfb8a-a0e1-40bf-b34f-52ff94e66345 (Updating crash deployment (+1 -> 2))
Jan 23 09:49:25 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event cdbdfb8a-a0e1-40bf-b34f-52ff94e66345 (Updating crash deployment (+1 -> 2)) in 3 seconds
Jan 23 09:49:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 23 09:49:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 09:49:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:49:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 09:49:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:49:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:49:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 09:49:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:49:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:49:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:25 compute-0 sudo[80692]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vddbsvvmsctvxfkwolgljmtzojdjiudj ; /usr/bin/python3'
Jan 23 09:49:25 compute-0 sudo[80692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:49:25 compute-0 sudo[80694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:49:25 compute-0 sudo[80694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:25 compute-0 sudo[80694]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:25 compute-0 sudo[80720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 09:49:25 compute-0 sudo[80720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:25 compute-0 python3[80695]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:49:25 compute-0 podman[80746]: 2026-01-23 09:49:25.494110919 +0000 UTC m=+0.026252020 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:49:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:25 compute-0 ceph-mon[74335]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:49:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:49:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:49:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:25 compute-0 podman[80746]: 2026-01-23 09:49:25.768141143 +0000 UTC m=+0.300282234 container create 9c9667e78d6495cddfb23bb1da72f2070d41f0faf760d8ee28f069eb53fa7cea (image=quay.io/ceph/ceph:v19, name=cool_chandrasekhar, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:49:25 compute-0 systemd[1]: Started libpod-conmon-9c9667e78d6495cddfb23bb1da72f2070d41f0faf760d8ee28f069eb53fa7cea.scope.
Jan 23 09:49:25 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2671be4608ac10011f81351f10965ba4e752949f416afa4125f4df7959f47f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2671be4608ac10011f81351f10965ba4e752949f416afa4125f4df7959f47f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2671be4608ac10011f81351f10965ba4e752949f416afa4125f4df7959f47f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:25 compute-0 podman[80746]: 2026-01-23 09:49:25.897611162 +0000 UTC m=+0.429752273 container init 9c9667e78d6495cddfb23bb1da72f2070d41f0faf760d8ee28f069eb53fa7cea (image=quay.io/ceph/ceph:v19, name=cool_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Jan 23 09:49:25 compute-0 podman[80746]: 2026-01-23 09:49:25.904241584 +0000 UTC m=+0.436382675 container start 9c9667e78d6495cddfb23bb1da72f2070d41f0faf760d8ee28f069eb53fa7cea (image=quay.io/ceph/ceph:v19, name=cool_chandrasekhar, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:49:25 compute-0 podman[80746]: 2026-01-23 09:49:25.964306361 +0000 UTC m=+0.496447452 container attach 9c9667e78d6495cddfb23bb1da72f2070d41f0faf760d8ee28f069eb53fa7cea (image=quay.io/ceph/ceph:v19, name=cool_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:49:26 compute-0 podman[80822]: 2026-01-23 09:49:26.111958539 +0000 UTC m=+0.043826662 container create a2fed5d537920a79874c8a32b63f333527f1deca892654ce163574f81df3812a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:49:26 compute-0 systemd[1]: Started libpod-conmon-a2fed5d537920a79874c8a32b63f333527f1deca892654ce163574f81df3812a.scope.
Jan 23 09:49:26 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:26 compute-0 podman[80822]: 2026-01-23 09:49:26.181870916 +0000 UTC m=+0.113739049 container init a2fed5d537920a79874c8a32b63f333527f1deca892654ce163574f81df3812a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 09:49:26 compute-0 podman[80822]: 2026-01-23 09:49:26.187045998 +0000 UTC m=+0.118914121 container start a2fed5d537920a79874c8a32b63f333527f1deca892654ce163574f81df3812a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:49:26 compute-0 podman[80822]: 2026-01-23 09:49:26.093606806 +0000 UTC m=+0.025474949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:26 compute-0 podman[80822]: 2026-01-23 09:49:26.190432511 +0000 UTC m=+0.122300654 container attach a2fed5d537920a79874c8a32b63f333527f1deca892654ce163574f81df3812a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 09:49:26 compute-0 keen_swanson[80838]: 167 167
Jan 23 09:49:26 compute-0 systemd[1]: libpod-a2fed5d537920a79874c8a32b63f333527f1deca892654ce163574f81df3812a.scope: Deactivated successfully.
Jan 23 09:49:26 compute-0 podman[80822]: 2026-01-23 09:49:26.193202727 +0000 UTC m=+0.125070850 container died a2fed5d537920a79874c8a32b63f333527f1deca892654ce163574f81df3812a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 09:49:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d06794f8d3a32815cb055ae01ec60b35c9d8f8ee0ce0e81e2add816407050810-merged.mount: Deactivated successfully.
Jan 23 09:49:26 compute-0 podman[80822]: 2026-01-23 09:49:26.236413471 +0000 UTC m=+0.168281594 container remove a2fed5d537920a79874c8a32b63f333527f1deca892654ce163574f81df3812a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_swanson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 09:49:26 compute-0 systemd[1]: libpod-conmon-a2fed5d537920a79874c8a32b63f333527f1deca892654ce163574f81df3812a.scope: Deactivated successfully.
Jan 23 09:49:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 23 09:49:26 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1852143948' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 09:49:26 compute-0 cool_chandrasekhar[80774]: 
Jan 23 09:49:26 compute-0 cool_chandrasekhar[80774]: {"fsid":"f3005f84-239a-55b6-a948-8f1fb592b920","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false},"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":104,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-23T09:47:38:565964+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-23T09:49:10.551708+0000","services":{}},"progress_events":{}}
Jan 23 09:49:26 compute-0 systemd[1]: libpod-9c9667e78d6495cddfb23bb1da72f2070d41f0faf760d8ee28f069eb53fa7cea.scope: Deactivated successfully.
Jan 23 09:49:26 compute-0 podman[80746]: 2026-01-23 09:49:26.380310007 +0000 UTC m=+0.912451108 container died 9c9667e78d6495cddfb23bb1da72f2070d41f0faf760d8ee28f069eb53fa7cea (image=quay.io/ceph/ceph:v19, name=cool_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:49:26 compute-0 podman[80861]: 2026-01-23 09:49:26.394251439 +0000 UTC m=+0.044415629 container create d225602a502d75110f3639149079d5f9ec3b6ea46f807e006b896bfb31aaccae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 23 09:49:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e2671be4608ac10011f81351f10965ba4e752949f416afa4125f4df7959f47f-merged.mount: Deactivated successfully.
Jan 23 09:49:26 compute-0 podman[80746]: 2026-01-23 09:49:26.427073239 +0000 UTC m=+0.959214330 container remove 9c9667e78d6495cddfb23bb1da72f2070d41f0faf760d8ee28f069eb53fa7cea (image=quay.io/ceph/ceph:v19, name=cool_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 09:49:26 compute-0 systemd[1]: Started libpod-conmon-d225602a502d75110f3639149079d5f9ec3b6ea46f807e006b896bfb31aaccae.scope.
Jan 23 09:49:26 compute-0 systemd[1]: libpod-conmon-9c9667e78d6495cddfb23bb1da72f2070d41f0faf760d8ee28f069eb53fa7cea.scope: Deactivated successfully.
Jan 23 09:49:26 compute-0 sudo[80692]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:26 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cf133356736304ed7cc8de167f251658ab73fc6957d36887f8f99bffd57220/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cf133356736304ed7cc8de167f251658ab73fc6957d36887f8f99bffd57220/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cf133356736304ed7cc8de167f251658ab73fc6957d36887f8f99bffd57220/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cf133356736304ed7cc8de167f251658ab73fc6957d36887f8f99bffd57220/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cf133356736304ed7cc8de167f251658ab73fc6957d36887f8f99bffd57220/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:26 compute-0 podman[80861]: 2026-01-23 09:49:26.374712283 +0000 UTC m=+0.024876503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:26 compute-0 podman[80861]: 2026-01-23 09:49:26.478485139 +0000 UTC m=+0.128649329 container init d225602a502d75110f3639149079d5f9ec3b6ea46f807e006b896bfb31aaccae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_vaughan, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 09:49:26 compute-0 podman[80861]: 2026-01-23 09:49:26.493831659 +0000 UTC m=+0.143995849 container start d225602a502d75110f3639149079d5f9ec3b6ea46f807e006b896bfb31aaccae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 23 09:49:26 compute-0 podman[80861]: 2026-01-23 09:49:26.499126015 +0000 UTC m=+0.149290235 container attach d225602a502d75110f3639149079d5f9ec3b6ea46f807e006b896bfb31aaccae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_vaughan, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 09:49:26 compute-0 inspiring_vaughan[80890]: --> passed data devices: 0 physical, 1 LVM
Jan 23 09:49:26 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 09:49:26 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 09:49:26 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e272688e-6b15-4719-9011-a7e7310819a5
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "92663454-00ec-4b9a-bcda-939cb5c501aa"} v 0)
Jan 23 09:49:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2566627347' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "92663454-00ec-4b9a-bcda-939cb5c501aa"}]: dispatch
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:49:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2566627347' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "92663454-00ec-4b9a-bcda-939cb5c501aa"}]': finished
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 23 09:49:27 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:27 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:27 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:27 compute-0 ceph-mon[74335]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1852143948' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 09:49:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2566627347' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "92663454-00ec-4b9a-bcda-939cb5c501aa"}]: dispatch
Jan 23 09:49:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2566627347' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "92663454-00ec-4b9a-bcda-939cb5c501aa"}]': finished
Jan 23 09:49:27 compute-0 ceph-mon[74335]: osdmap e4: 1 total, 0 up, 1 in
Jan 23 09:49:27 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e272688e-6b15-4719-9011-a7e7310819a5"} v 0)
Jan 23 09:49:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2982047325' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e272688e-6b15-4719-9011-a7e7310819a5"}]: dispatch
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:49:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2982047325' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e272688e-6b15-4719-9011-a7e7310819a5"}]': finished
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 23 09:49:27 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:27 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:27 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:27 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:27 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:27 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 23 09:49:27 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 23 09:49:27 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 09:49:27 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:27 compute-0 lvm[80953]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:49:27 compute-0 lvm[80953]: VG ceph_vg0 finished
Jan 23 09:49:27 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 23 09:49:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 23 09:49:27 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1623226064' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 23 09:49:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 23 09:49:27 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329281084' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 23 09:49:27 compute-0 inspiring_vaughan[80890]:  stderr: got monmap epoch 1
Jan 23 09:49:27 compute-0 inspiring_vaughan[80890]: --> Creating keyring file for osd.1
Jan 23 09:49:27 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 23 09:49:27 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 23 09:49:27 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid e272688e-6b15-4719-9011-a7e7310819a5 --setuser ceph --setgroup ceph
Jan 23 09:49:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2982047325' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e272688e-6b15-4719-9011-a7e7310819a5"}]: dispatch
Jan 23 09:49:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2982047325' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e272688e-6b15-4719-9011-a7e7310819a5"}]': finished
Jan 23 09:49:28 compute-0 ceph-mon[74335]: osdmap e5: 2 total, 0 up, 2 in
Jan 23 09:49:28 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:28 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1623226064' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 23 09:49:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/329281084' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 23 09:49:28 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 23 09:49:28 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 2 completed events
Jan 23 09:49:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:49:28 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:29 compute-0 ceph-mon[74335]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:29 compute-0 ceph-mon[74335]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 23 09:49:29 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:31 compute-0 inspiring_vaughan[80890]:  stderr: 2026-01-23T09:49:28.022+0000 7f95fff68740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 23 09:49:31 compute-0 inspiring_vaughan[80890]:  stderr: 2026-01-23T09:49:28.288+0000 7f95fff68740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 23 09:49:31 compute-0 inspiring_vaughan[80890]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 23 09:49:31 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 09:49:31 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 23 09:49:31 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:31 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:31 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 09:49:31 compute-0 inspiring_vaughan[80890]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 09:49:31 compute-0 inspiring_vaughan[80890]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 23 09:49:31 compute-0 inspiring_vaughan[80890]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 23 09:49:31 compute-0 systemd[1]: libpod-d225602a502d75110f3639149079d5f9ec3b6ea46f807e006b896bfb31aaccae.scope: Deactivated successfully.
Jan 23 09:49:31 compute-0 systemd[1]: libpod-d225602a502d75110f3639149079d5f9ec3b6ea46f807e006b896bfb31aaccae.scope: Consumed 2.101s CPU time.
Jan 23 09:49:31 compute-0 podman[80861]: 2026-01-23 09:49:31.448727985 +0000 UTC m=+5.098892175 container died d225602a502d75110f3639149079d5f9ec3b6ea46f807e006b896bfb31aaccae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_vaughan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 09:49:31 compute-0 ceph-mon[74335]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-71cf133356736304ed7cc8de167f251658ab73fc6957d36887f8f99bffd57220-merged.mount: Deactivated successfully.
Jan 23 09:49:31 compute-0 podman[80861]: 2026-01-23 09:49:31.881610994 +0000 UTC m=+5.531775184 container remove d225602a502d75110f3639149079d5f9ec3b6ea46f807e006b896bfb31aaccae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 23 09:49:31 compute-0 systemd[1]: libpod-conmon-d225602a502d75110f3639149079d5f9ec3b6ea46f807e006b896bfb31aaccae.scope: Deactivated successfully.
Jan 23 09:49:31 compute-0 sudo[80720]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:31 compute-0 sudo[81872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:49:32 compute-0 sudo[81872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:32 compute-0 sudo[81872]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:32 compute-0 sudo[81897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 09:49:32 compute-0 sudo[81897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:32 compute-0 podman[81957]: 2026-01-23 09:49:32.412988474 +0000 UTC m=+0.023149216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:32 compute-0 podman[81957]: 2026-01-23 09:49:32.639222357 +0000 UTC m=+0.249383089 container create 43b8a3f3d44416870ab26dc25b49a79e9d72ca4a0f5875cbbd07545c741000df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_rubin, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 09:49:32 compute-0 systemd[1]: Started libpod-conmon-43b8a3f3d44416870ab26dc25b49a79e9d72ca4a0f5875cbbd07545c741000df.scope.
Jan 23 09:49:32 compute-0 ceph-mon[74335]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:32 compute-0 podman[81957]: 2026-01-23 09:49:32.721576875 +0000 UTC m=+0.331737627 container init 43b8a3f3d44416870ab26dc25b49a79e9d72ca4a0f5875cbbd07545c741000df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 09:49:32 compute-0 podman[81957]: 2026-01-23 09:49:32.730291964 +0000 UTC m=+0.340452686 container start 43b8a3f3d44416870ab26dc25b49a79e9d72ca4a0f5875cbbd07545c741000df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_rubin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:49:32 compute-0 wonderful_rubin[81974]: 167 167
Jan 23 09:49:32 compute-0 systemd[1]: libpod-43b8a3f3d44416870ab26dc25b49a79e9d72ca4a0f5875cbbd07545c741000df.scope: Deactivated successfully.
Jan 23 09:49:32 compute-0 podman[81957]: 2026-01-23 09:49:32.737495562 +0000 UTC m=+0.347656304 container attach 43b8a3f3d44416870ab26dc25b49a79e9d72ca4a0f5875cbbd07545c741000df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_rubin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:49:32 compute-0 podman[81957]: 2026-01-23 09:49:32.740154625 +0000 UTC m=+0.350315357 container died 43b8a3f3d44416870ab26dc25b49a79e9d72ca4a0f5875cbbd07545c741000df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_rubin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:49:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9015b8d3a4d55f04e53844783b266e8c640bec650b3c99dcc7a6aff45943ca55-merged.mount: Deactivated successfully.
Jan 23 09:49:32 compute-0 podman[81957]: 2026-01-23 09:49:32.793075565 +0000 UTC m=+0.403236287 container remove 43b8a3f3d44416870ab26dc25b49a79e9d72ca4a0f5875cbbd07545c741000df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_rubin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 09:49:32 compute-0 systemd[1]: libpod-conmon-43b8a3f3d44416870ab26dc25b49a79e9d72ca4a0f5875cbbd07545c741000df.scope: Deactivated successfully.
Jan 23 09:49:32 compute-0 podman[81997]: 2026-01-23 09:49:32.957154464 +0000 UTC m=+0.050457504 container create 292b4637621dffe1c891e822d6bb58bd601e79108a0f463cc71c735ce684de41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 09:49:32 compute-0 systemd[1]: Started libpod-conmon-292b4637621dffe1c891e822d6bb58bd601e79108a0f463cc71c735ce684de41.scope.
Jan 23 09:49:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dab02eb74a9776c05ff2d15b1fe028cea3c15ae229a3e9ac4116c83406fd255/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dab02eb74a9776c05ff2d15b1fe028cea3c15ae229a3e9ac4116c83406fd255/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dab02eb74a9776c05ff2d15b1fe028cea3c15ae229a3e9ac4116c83406fd255/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dab02eb74a9776c05ff2d15b1fe028cea3c15ae229a3e9ac4116c83406fd255/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:33 compute-0 podman[81997]: 2026-01-23 09:49:32.935044418 +0000 UTC m=+0.028347268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:33 compute-0 podman[81997]: 2026-01-23 09:49:33.032807468 +0000 UTC m=+0.126110318 container init 292b4637621dffe1c891e822d6bb58bd601e79108a0f463cc71c735ce684de41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:49:33 compute-0 podman[81997]: 2026-01-23 09:49:33.039312357 +0000 UTC m=+0.132615187 container start 292b4637621dffe1c891e822d6bb58bd601e79108a0f463cc71c735ce684de41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 09:49:33 compute-0 podman[81997]: 2026-01-23 09:49:33.044304174 +0000 UTC m=+0.137607014 container attach 292b4637621dffe1c891e822d6bb58bd601e79108a0f463cc71c735ce684de41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:49:33 compute-0 cranky_kare[82013]: {
Jan 23 09:49:33 compute-0 cranky_kare[82013]:     "1": [
Jan 23 09:49:33 compute-0 cranky_kare[82013]:         {
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             "devices": [
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "/dev/loop3"
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             ],
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             "lv_name": "ceph_lv0",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             "lv_size": "21470642176",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             "name": "ceph_lv0",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             "tags": {
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.cluster_name": "ceph",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.crush_device_class": "",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.encrypted": "0",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.osd_id": "1",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.type": "block",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.vdo": "0",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:                 "ceph.with_tpm": "0"
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             },
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             "type": "block",
Jan 23 09:49:33 compute-0 cranky_kare[82013]:             "vg_name": "ceph_vg0"
Jan 23 09:49:33 compute-0 cranky_kare[82013]:         }
Jan 23 09:49:33 compute-0 cranky_kare[82013]:     ]
Jan 23 09:49:33 compute-0 cranky_kare[82013]: }
Jan 23 09:49:33 compute-0 systemd[1]: libpod-292b4637621dffe1c891e822d6bb58bd601e79108a0f463cc71c735ce684de41.scope: Deactivated successfully.
Jan 23 09:49:33 compute-0 podman[82022]: 2026-01-23 09:49:33.392016378 +0000 UTC m=+0.027786073 container died 292b4637621dffe1c891e822d6bb58bd601e79108a0f463cc71c735ce684de41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dab02eb74a9776c05ff2d15b1fe028cea3c15ae229a3e9ac4116c83406fd255-merged.mount: Deactivated successfully.
Jan 23 09:49:33 compute-0 podman[82022]: 2026-01-23 09:49:33.437168006 +0000 UTC m=+0.072937671 container remove 292b4637621dffe1c891e822d6bb58bd601e79108a0f463cc71c735ce684de41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 09:49:33 compute-0 systemd[1]: libpod-conmon-292b4637621dffe1c891e822d6bb58bd601e79108a0f463cc71c735ce684de41.scope: Deactivated successfully.
Jan 23 09:49:33 compute-0 sudo[81897]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 23 09:49:33 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 23 09:49:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:49:33 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:33 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 23 09:49:33 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 23 09:49:33 compute-0 sudo[82037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:49:33 compute-0 sudo[82037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:33 compute-0 sudo[82037]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:33 compute-0 sudo[82062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:49:33 compute-0 sudo[82062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:33 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 23 09:49:33 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:33 compute-0 ceph-mon[74335]: Deploying daemon osd.1 on compute-0
Jan 23 09:49:33 compute-0 podman[82127]: 2026-01-23 09:49:33.991444433 +0000 UTC m=+0.036301316 container create 468807f5831917de36c59757dc2c0a068bcf0ef403403de197e3de5cfcf7e2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_babbage, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:49:34 compute-0 systemd[1]: Started libpod-conmon-468807f5831917de36c59757dc2c0a068bcf0ef403403de197e3de5cfcf7e2c9.scope.
Jan 23 09:49:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:34 compute-0 podman[82127]: 2026-01-23 09:49:34.055797898 +0000 UTC m=+0.100654801 container init 468807f5831917de36c59757dc2c0a068bcf0ef403403de197e3de5cfcf7e2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:49:34 compute-0 podman[82127]: 2026-01-23 09:49:34.063445938 +0000 UTC m=+0.108302821 container start 468807f5831917de36c59757dc2c0a068bcf0ef403403de197e3de5cfcf7e2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:49:34 compute-0 adoring_babbage[82144]: 167 167
Jan 23 09:49:34 compute-0 systemd[1]: libpod-468807f5831917de36c59757dc2c0a068bcf0ef403403de197e3de5cfcf7e2c9.scope: Deactivated successfully.
Jan 23 09:49:34 compute-0 podman[82127]: 2026-01-23 09:49:34.068224559 +0000 UTC m=+0.113081462 container attach 468807f5831917de36c59757dc2c0a068bcf0ef403403de197e3de5cfcf7e2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_babbage, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:49:34 compute-0 podman[82127]: 2026-01-23 09:49:34.069079142 +0000 UTC m=+0.113936025 container died 468807f5831917de36c59757dc2c0a068bcf0ef403403de197e3de5cfcf7e2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_babbage, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 09:49:34 compute-0 podman[82127]: 2026-01-23 09:49:33.976407711 +0000 UTC m=+0.021264624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6193909c43fa960e84f7dcdf39a318cab26e07e19497c54ff75bca6137015acc-merged.mount: Deactivated successfully.
Jan 23 09:49:34 compute-0 podman[82127]: 2026-01-23 09:49:34.108257856 +0000 UTC m=+0.153114729 container remove 468807f5831917de36c59757dc2c0a068bcf0ef403403de197e3de5cfcf7e2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:49:34 compute-0 systemd[1]: libpod-conmon-468807f5831917de36c59757dc2c0a068bcf0ef403403de197e3de5cfcf7e2c9.scope: Deactivated successfully.
Jan 23 09:49:34 compute-0 podman[82174]: 2026-01-23 09:49:34.371935225 +0000 UTC m=+0.068307354 container create 297aab5015e9b440a3afd6aa9075b110e5745deade71da6abfc522bbbaad2852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 09:49:34 compute-0 systemd[1]: Started libpod-conmon-297aab5015e9b440a3afd6aa9075b110e5745deade71da6abfc522bbbaad2852.scope.
Jan 23 09:49:34 compute-0 podman[82174]: 2026-01-23 09:49:34.353758037 +0000 UTC m=+0.050130166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75c4e3959089649bc242983c19a0c351ba46325e66fa6aba81d301c013fe874/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75c4e3959089649bc242983c19a0c351ba46325e66fa6aba81d301c013fe874/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75c4e3959089649bc242983c19a0c351ba46325e66fa6aba81d301c013fe874/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75c4e3959089649bc242983c19a0c351ba46325e66fa6aba81d301c013fe874/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75c4e3959089649bc242983c19a0c351ba46325e66fa6aba81d301c013fe874/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:34 compute-0 podman[82174]: 2026-01-23 09:49:34.448907726 +0000 UTC m=+0.145279865 container init 297aab5015e9b440a3afd6aa9075b110e5745deade71da6abfc522bbbaad2852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:49:34 compute-0 podman[82174]: 2026-01-23 09:49:34.459908227 +0000 UTC m=+0.156280346 container start 297aab5015e9b440a3afd6aa9075b110e5745deade71da6abfc522bbbaad2852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate-test, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:49:34 compute-0 podman[82174]: 2026-01-23 09:49:34.470829317 +0000 UTC m=+0.167201506 container attach 297aab5015e9b440a3afd6aa9075b110e5745deade71da6abfc522bbbaad2852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 09:49:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate-test[82190]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 23 09:49:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate-test[82190]:                             [--no-systemd] [--no-tmpfs]
Jan 23 09:49:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate-test[82190]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 23 09:49:34 compute-0 systemd[1]: libpod-297aab5015e9b440a3afd6aa9075b110e5745deade71da6abfc522bbbaad2852.scope: Deactivated successfully.
Jan 23 09:49:34 compute-0 podman[82174]: 2026-01-23 09:49:34.67737724 +0000 UTC m=+0.373749359 container died 297aab5015e9b440a3afd6aa9075b110e5745deade71da6abfc522bbbaad2852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate-test, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 23 09:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b75c4e3959089649bc242983c19a0c351ba46325e66fa6aba81d301c013fe874-merged.mount: Deactivated successfully.
Jan 23 09:49:34 compute-0 ceph-mon[74335]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:34 compute-0 podman[82174]: 2026-01-23 09:49:34.74266533 +0000 UTC m=+0.439037449 container remove 297aab5015e9b440a3afd6aa9075b110e5745deade71da6abfc522bbbaad2852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:49:34 compute-0 systemd[1]: libpod-conmon-297aab5015e9b440a3afd6aa9075b110e5745deade71da6abfc522bbbaad2852.scope: Deactivated successfully.
Jan 23 09:49:34 compute-0 systemd[1]: Reloading.
Jan 23 09:49:35 compute-0 systemd-sysv-generator[82256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:49:35 compute-0 systemd-rc-local-generator[82252]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:49:35 compute-0 systemd[1]: Reloading.
Jan 23 09:49:35 compute-0 systemd-sysv-generator[82295]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:49:35 compute-0 systemd-rc-local-generator[82291]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:49:35 compute-0 systemd[1]: Starting Ceph osd.1 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:49:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:35 compute-0 podman[82355]: 2026-01-23 09:49:35.758136503 +0000 UTC m=+0.046272420 container create c02c0ac460673c976ef824145dc38629542136ac33b187ee4e79b58ae7dc419b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 23 09:49:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a28605621fb0ba85a4e8ae756b96de1b20a12770c7316bc8b4896305b634a96e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a28605621fb0ba85a4e8ae756b96de1b20a12770c7316bc8b4896305b634a96e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a28605621fb0ba85a4e8ae756b96de1b20a12770c7316bc8b4896305b634a96e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a28605621fb0ba85a4e8ae756b96de1b20a12770c7316bc8b4896305b634a96e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a28605621fb0ba85a4e8ae756b96de1b20a12770c7316bc8b4896305b634a96e/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:35 compute-0 podman[82355]: 2026-01-23 09:49:35.83606776 +0000 UTC m=+0.124203697 container init c02c0ac460673c976ef824145dc38629542136ac33b187ee4e79b58ae7dc419b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:49:35 compute-0 podman[82355]: 2026-01-23 09:49:35.741132267 +0000 UTC m=+0.029268214 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:35 compute-0 podman[82355]: 2026-01-23 09:49:35.842016443 +0000 UTC m=+0.130152360 container start c02c0ac460673c976ef824145dc38629542136ac33b187ee4e79b58ae7dc419b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 09:49:35 compute-0 podman[82355]: 2026-01-23 09:49:35.849703144 +0000 UTC m=+0.137839081 container attach c02c0ac460673c976ef824145dc38629542136ac33b187ee4e79b58ae7dc419b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Jan 23 09:49:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 09:49:36 compute-0 bash[82355]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 09:49:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 09:49:36 compute-0 bash[82355]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 09:49:36 compute-0 lvm[82451]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:49:36 compute-0 lvm[82451]: VG ceph_vg0 finished
Jan 23 09:49:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 23 09:49:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 09:49:36 compute-0 bash[82355]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 23 09:49:36 compute-0 bash[82355]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 09:49:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 09:49:36 compute-0 bash[82355]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 09:49:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 09:49:36 compute-0 bash[82355]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 09:49:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 23 09:49:36 compute-0 bash[82355]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 23 09:49:36 compute-0 ceph-mon[74335]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:37 compute-0 bash[82355]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:37 compute-0 bash[82355]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 09:49:37 compute-0 bash[82355]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 09:49:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 09:49:37 compute-0 bash[82355]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 23 09:49:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate[82370]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 23 09:49:37 compute-0 bash[82355]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 23 09:49:37 compute-0 systemd[1]: libpod-c02c0ac460673c976ef824145dc38629542136ac33b187ee4e79b58ae7dc419b.scope: Deactivated successfully.
Jan 23 09:49:37 compute-0 systemd[1]: libpod-c02c0ac460673c976ef824145dc38629542136ac33b187ee4e79b58ae7dc419b.scope: Consumed 1.329s CPU time.
Jan 23 09:49:37 compute-0 podman[82355]: 2026-01-23 09:49:37.079899035 +0000 UTC m=+1.368034962 container died c02c0ac460673c976ef824145dc38629542136ac33b187ee4e79b58ae7dc419b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:49:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a28605621fb0ba85a4e8ae756b96de1b20a12770c7316bc8b4896305b634a96e-merged.mount: Deactivated successfully.
Jan 23 09:49:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:49:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:49:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:49:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:49:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:49:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:49:38 compute-0 podman[82355]: 2026-01-23 09:49:38.805798046 +0000 UTC m=+3.093933963 container remove c02c0ac460673c976ef824145dc38629542136ac33b187ee4e79b58ae7dc419b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:49:39 compute-0 podman[82622]: 2026-01-23 09:49:38.994744106 +0000 UTC m=+0.025504610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:39 compute-0 podman[82622]: 2026-01-23 09:49:39.148331937 +0000 UTC m=+0.179092431 container create ba38de35226506cb699780f729ec895e86e90cec52f99c13abc1fc038212a39b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 09:49:39 compute-0 ceph-mon[74335]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 23 09:49:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 23 09:49:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:49:39 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:39 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Jan 23 09:49:39 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Jan 23 09:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74e45de9281b8b1321d7bd4f43611748e33c5862c905273bf511f9bc0c699ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74e45de9281b8b1321d7bd4f43611748e33c5862c905273bf511f9bc0c699ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74e45de9281b8b1321d7bd4f43611748e33c5862c905273bf511f9bc0c699ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74e45de9281b8b1321d7bd4f43611748e33c5862c905273bf511f9bc0c699ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74e45de9281b8b1321d7bd4f43611748e33c5862c905273bf511f9bc0c699ee/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:39 compute-0 podman[82622]: 2026-01-23 09:49:39.700140517 +0000 UTC m=+0.730901011 container init ba38de35226506cb699780f729ec895e86e90cec52f99c13abc1fc038212a39b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:49:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:39 compute-0 podman[82622]: 2026-01-23 09:49:39.706770639 +0000 UTC m=+0.737531133 container start ba38de35226506cb699780f729ec895e86e90cec52f99c13abc1fc038212a39b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 09:49:39 compute-0 ceph-osd[82641]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 09:49:39 compute-0 ceph-osd[82641]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Jan 23 09:49:39 compute-0 ceph-osd[82641]: pidfile_write: ignore empty --pid-file
Jan 23 09:49:39 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:39 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:39 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:39 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:39 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:39 compute-0 bash[82622]: ba38de35226506cb699780f729ec895e86e90cec52f99c13abc1fc038212a39b
Jan 23 09:49:39 compute-0 systemd[1]: Started Ceph osd.1 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:49:39 compute-0 sudo[82062]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509bc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509bc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509bc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509bc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509bc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:49:40 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 23 09:49:40 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:49:40 compute-0 ceph-mon[74335]: Deploying daemon osd.0 on compute-1
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a509b800 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:40 compute-0 ceph-osd[82641]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 23 09:49:40 compute-0 ceph-osd[82641]: load: jerasure load: lrc 
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:40 compute-0 sudo[82671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:49:40 compute-0 sudo[82671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:40 compute-0 sudo[82671]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:40 compute-0 sudo[82696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 09:49:40 compute-0 sudo[82696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 09:49:40 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:41 compute-0 ceph-osd[82641]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 23 09:49:41 compute-0 ceph-osd[82641]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:41 compute-0 podman[82766]: 2026-01-23 09:49:41.290900764 +0000 UTC m=+0.083404678 container create a5686dd815e1d67545569a77e72441807f94316c9b555a8d0b03304030f0c18f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f36c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f37000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f37000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f37000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f37000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount shared_bdev_used = 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: RocksDB version: 7.9.2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Git sha 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: DB SUMMARY
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: DB Session ID:  T9JR9VG5GUWJYGD4HN4Y
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: CURRENT file:  CURRENT
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                         Options.error_if_exists: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.create_if_missing: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                                     Options.env: 0x55c0a5f07dc0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                                Options.info_log: 0x55c0a5f0b7a0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                              Options.statistics: (nil)
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.use_fsync: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                              Options.db_log_dir: 
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                                 Options.wal_dir: db.wal
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.write_buffer_manager: 0x55c0a6002a00
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.unordered_write: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.row_cache: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                              Options.wal_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.two_write_queues: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.wal_compression: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.atomic_flush: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.max_background_jobs: 4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.max_background_compactions: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.max_subcompactions: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.max_open_files: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Compression algorithms supported:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kZSTD supported: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kXpressCompression supported: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kBZip2Compression supported: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kLZ4Compression supported: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kZlibCompression supported: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kSnappyCompression supported: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-mon[74335]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 podman[82766]: 2026-01-23 09:49:41.234851347 +0000 UTC m=+0.027355281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a51309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a51309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a51309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2b73f5b4-fb02-4cd0-b679-01c96d2c39cc
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161781338498, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161781338747, "job": 1, "event": "recovery_finished"}
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: freelist init
Jan 23 09:49:41 compute-0 ceph-osd[82641]: freelist _read_cfg
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs umount
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f37000 /var/lib/ceph/osd/ceph-1/block) close
Jan 23 09:49:41 compute-0 systemd[1]: Started libpod-conmon-a5686dd815e1d67545569a77e72441807f94316c9b555a8d0b03304030f0c18f.scope.
Jan 23 09:49:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f37000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f37000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f37000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bdev(0x55c0a5f37000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluefs mount shared_bdev_used = 4718592
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: RocksDB version: 7.9.2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Git sha 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: DB SUMMARY
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: DB Session ID:  T9JR9VG5GUWJYGD4HN4Z
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: CURRENT file:  CURRENT
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                         Options.error_if_exists: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.create_if_missing: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                                     Options.env: 0x55c0a60ae310
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                                Options.info_log: 0x55c0a5f0b920
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                              Options.statistics: (nil)
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.use_fsync: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                              Options.db_log_dir: 
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                                 Options.wal_dir: db.wal
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.write_buffer_manager: 0x55c0a6002a00
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.unordered_write: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.row_cache: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                              Options.wal_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.two_write_queues: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.wal_compression: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.atomic_flush: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.max_background_jobs: 4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.max_background_compactions: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.max_subcompactions: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.max_open_files: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Compression algorithms supported:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kZSTD supported: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kXpressCompression supported: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kBZip2Compression supported: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kLZ4Compression supported: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kZlibCompression supported: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         kSnappyCompression supported: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
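The per-column-family dumps above (p-0, p-1, p-2, O-0, O-1, ...) are easiest to compare mechanically rather than by eye. Below is a minimal sketch, not part of Ceph or of this log, that assumes the journal excerpt has been saved to a plain-text file (e.g. osd-options.log); it parses the "Options.<name>: <value>" lines per column family and prints any values that differ between families.

#!/usr/bin/env python3
# Hypothetical helper: compare the per-column-family RocksDB option dumps
# that ceph-osd writes to the journal. Assumes the excerpt above was saved
# to a file; the indented table_factory sub-options (which lack the
# "rocksdb:" prefix) are deliberately skipped.
import re
import sys
from collections import OrderedDict

CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
OPTION = re.compile(r"rocksdb:\s+(Options\.[A-Za-z0-9_.\[\]]+):\s+(.*)$")

def parse(path):
    per_cf = OrderedDict()
    current = None
    with open(path) as fh:
        for line in fh:
            m = CF_HEADER.search(line)
            if m:
                current = per_cf.setdefault(m.group(1), {})
                continue
            m = OPTION.search(line)
            if m and current is not None:
                current[m.group(1)] = m.group(2).strip()
    return per_cf

def diff(per_cf):
    cfs = list(per_cf)
    if not cfs:
        return
    base_cf, base = cfs[0], per_cf[cfs[0]]
    for cf in cfs[1:]:
        for key, value in per_cf[cf].items():
            if base.get(key) != value:
                print(f"{cf}: {key} = {value} (vs {base_cf}: {base.get(key)})")

if __name__ == "__main__":
    per_cf = parse(sys.argv[1])
    print("column families:", ", ".join(per_cf))
    diff(per_cf)

Run as "python3 compare_cf_options.py osd-options.log". For the excerpt shown here it should list the column families and report no differing Options.* values, since p-0 through O-1 carry identical tuning; the only per-group differences are in the indented table_factory sub-options (block cache instance and capacity), which this sketch intentionally ignores.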
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0b680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a5131350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a51309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
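For a rough sense of scale, the byte values in these dumps convert as follows. This is illustrative arithmetic only, using numbers visible above; actual per-OSD memory use depends on how many memtables are live at once, so treat the per-CF figure as a theoretical upper bound rather than a measured footprint.

# Illustrative arithmetic over values taken from the dump above.
MiB = 1024 * 1024

write_buffer_size = 16777216      # Options.write_buffer_size (one memtable)
max_write_buffers = 64            # Options.max_write_buffer_number
min_merge         = 6             # Options.min_write_buffer_number_to_merge
target_file_size  = 67108864      # Options.target_file_size_base
level_base        = 1073741824    # Options.max_bytes_for_level_base
block_cache_p     = 483183820     # cache instance shared by the p-* families
block_cache_O     = 536870912     # cache instance shared by the O-* families

print(f"memtable: {write_buffer_size / MiB:.0f} MiB; "
      f"worst case per CF: {write_buffer_size * max_write_buffers / MiB:.0f} MiB")
print(f"flush batches roughly {write_buffer_size * min_merge / MiB:.0f} MiB "
      f"({min_merge} memtables merged per flush)")
print(f"SST target {target_file_size / MiB:.0f} MiB, L1 budget {level_base / MiB:.0f} MiB")
print(f"block caches: p-* {block_cache_p / MiB:.0f} MiB, O-* {block_cache_O / MiB:.0f} MiB")

In other words: 16 MiB memtables (up to 64 per column family, i.e. 1 GiB in the worst case), ~96 MiB merged per flush, 64 MiB SST files with a 1 GiB level-1 budget, and two shared block caches of roughly 461 MiB (p-* families) and 512 MiB (O-* families).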
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a51309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:           Options.merge_operator: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a5f0bac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a51309b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.compression: LZ4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.num_levels: 7
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.bloom_locality: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                               Options.ttl: 2592000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                       Options.enable_blob_files: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                           Options.min_blob_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2b73f5b4-fb02-4cd0-b679-01c96d2c39cc
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161781598060, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 23 09:49:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161781815699, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161781, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b73f5b4-fb02-4cd0-b679-01c96d2c39cc", "db_session_id": "T9JR9VG5GUWJYGD4HN4Z", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:49:41 compute-0 podman[82766]: 2026-01-23 09:49:41.815868028 +0000 UTC m=+0.608371952 container init a5686dd815e1d67545569a77e72441807f94316c9b555a8d0b03304030f0c18f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_saha, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161781818925, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161781, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b73f5b4-fb02-4cd0-b679-01c96d2c39cc", "db_session_id": "T9JR9VG5GUWJYGD4HN4Z", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161781822546, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161781, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b73f5b4-fb02-4cd0-b679-01c96d2c39cc", "db_session_id": "T9JR9VG5GUWJYGD4HN4Z", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161781824921, "job": 1, "event": "recovery_finished"}
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 23 09:49:41 compute-0 podman[82766]: 2026-01-23 09:49:41.825299377 +0000 UTC m=+0.617803291 container start a5686dd815e1d67545569a77e72441807f94316c9b555a8d0b03304030f0c18f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_saha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Jan 23 09:49:41 compute-0 systemd[1]: libpod-a5686dd815e1d67545569a77e72441807f94316c9b555a8d0b03304030f0c18f.scope: Deactivated successfully.
Jan 23 09:49:41 compute-0 cranky_saha[82982]: 167 167
Jan 23 09:49:41 compute-0 conmon[82982]: conmon a5686dd815e1d6754556 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5686dd815e1d67545569a77e72441807f94316c9b555a8d0b03304030f0c18f.scope/container/memory.events
Jan 23 09:49:41 compute-0 podman[82766]: 2026-01-23 09:49:41.836949936 +0000 UTC m=+0.629453970 container attach a5686dd815e1d67545569a77e72441807f94316c9b555a8d0b03304030f0c18f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_saha, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:49:41 compute-0 podman[82766]: 2026-01-23 09:49:41.838514919 +0000 UTC m=+0.631018853 container died a5686dd815e1d67545569a77e72441807f94316c9b555a8d0b03304030f0c18f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c0a6112000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: DB pointer 0x55c0a60bc000
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 23 09:49:41 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 23 09:49:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b1a98a515deac7f53e67d3fb29b8f542fed28a291d64884e4852bc789b05d43-merged.mount: Deactivated successfully.
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 09:49:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 09:49:41 compute-0 ceph-osd[82641]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 23 09:49:41 compute-0 ceph-osd[82641]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 23 09:49:41 compute-0 ceph-osd[82641]: _get_class not permitted to load lua
Jan 23 09:49:41 compute-0 ceph-osd[82641]: _get_class not permitted to load sdk
Jan 23 09:49:41 compute-0 ceph-osd[82641]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 23 09:49:41 compute-0 ceph-osd[82641]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 23 09:49:41 compute-0 ceph-osd[82641]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 23 09:49:41 compute-0 ceph-osd[82641]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 23 09:49:41 compute-0 ceph-osd[82641]: osd.1 0 load_pgs
Jan 23 09:49:41 compute-0 ceph-osd[82641]: osd.1 0 load_pgs opened 0 pgs
Jan 23 09:49:41 compute-0 ceph-osd[82641]: osd.1 0 log_to_monitors true
Jan 23 09:49:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1[82637]: 2026-01-23T09:49:41.891+0000 7fe20e318740 -1 osd.1 0 log_to_monitors true
Jan 23 09:49:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 23 09:49:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2992979105,v1:192.168.122.100:6803/2992979105]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 23 09:49:41 compute-0 podman[82766]: 2026-01-23 09:49:41.908866598 +0000 UTC m=+0.701370512 container remove a5686dd815e1d67545569a77e72441807f94316c9b555a8d0b03304030f0c18f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_saha, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:49:41 compute-0 systemd[1]: libpod-conmon-a5686dd815e1d67545569a77e72441807f94316c9b555a8d0b03304030f0c18f.scope: Deactivated successfully.
Jan 23 09:49:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:42 compute-0 podman[83223]: 2026-01-23 09:49:42.079526127 +0000 UTC m=+0.053218360 container create de9df739e1c41e4d73e593cac33fe6f767f99cdc8405df383816ba0623033538 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 09:49:42 compute-0 systemd[1]: Started libpod-conmon-de9df739e1c41e4d73e593cac33fe6f767f99cdc8405df383816ba0623033538.scope.
Jan 23 09:49:42 compute-0 podman[83223]: 2026-01-23 09:49:42.053915075 +0000 UTC m=+0.027607328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b9f59fe9cf3081ced0112417e2f64e6e3cef1491ebd2c7246e21560cce2040/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b9f59fe9cf3081ced0112417e2f64e6e3cef1491ebd2c7246e21560cce2040/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b9f59fe9cf3081ced0112417e2f64e6e3cef1491ebd2c7246e21560cce2040/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b9f59fe9cf3081ced0112417e2f64e6e3cef1491ebd2c7246e21560cce2040/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:42 compute-0 podman[83223]: 2026-01-23 09:49:42.264980891 +0000 UTC m=+0.238673144 container init de9df739e1c41e4d73e593cac33fe6f767f99cdc8405df383816ba0623033538 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_rhodes, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 23 09:49:42 compute-0 podman[83223]: 2026-01-23 09:49:42.271742557 +0000 UTC m=+0.245434790 container start de9df739e1c41e4d73e593cac33fe6f767f99cdc8405df383816ba0623033538 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:49:42 compute-0 podman[83223]: 2026-01-23 09:49:42.275595923 +0000 UTC m=+0.249288156 container attach de9df739e1c41e4d73e593cac33fe6f767f99cdc8405df383816ba0623033538 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_rhodes, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 09:49:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 23 09:49:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:49:42 compute-0 ceph-mon[74335]: from='osd.1 [v2:192.168.122.100:6802/2992979105,v1:192.168.122.100:6803/2992979105]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 23 09:49:42 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2992979105,v1:192.168.122.100:6803/2992979105]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 23 09:49:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 23 09:49:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 23 09:49:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 23 09:49:42 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2992979105,v1:192.168.122.100:6803/2992979105]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 23 09:49:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Jan 23 09:49:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:42 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:42 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:42 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:42 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:42 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 23 09:49:42 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 23 09:49:42 compute-0 lvm[83312]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:49:42 compute-0 lvm[83312]: VG ceph_vg0 finished
Jan 23 09:49:42 compute-0 elastic_rhodes[83239]: {}
Jan 23 09:49:43 compute-0 systemd[1]: libpod-de9df739e1c41e4d73e593cac33fe6f767f99cdc8405df383816ba0623033538.scope: Deactivated successfully.
Jan 23 09:49:43 compute-0 podman[83223]: 2026-01-23 09:49:43.021976237 +0000 UTC m=+0.995668470 container died de9df739e1c41e4d73e593cac33fe6f767f99cdc8405df383816ba0623033538 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:49:43 compute-0 systemd[1]: libpod-de9df739e1c41e4d73e593cac33fe6f767f99cdc8405df383816ba0623033538.scope: Consumed 1.134s CPU time.
Jan 23 09:49:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-56b9f59fe9cf3081ced0112417e2f64e6e3cef1491ebd2c7246e21560cce2040-merged.mount: Deactivated successfully.
Jan 23 09:49:43 compute-0 podman[83223]: 2026-01-23 09:49:43.260450016 +0000 UTC m=+1.234142249 container remove de9df739e1c41e4d73e593cac33fe6f767f99cdc8405df383816ba0623033538 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:49:43 compute-0 systemd[1]: libpod-conmon-de9df739e1c41e4d73e593cac33fe6f767f99cdc8405df383816ba0623033538.scope: Deactivated successfully.
Jan 23 09:49:43 compute-0 sudo[82696]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:49:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:49:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 23 09:49:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:49:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2992979105,v1:192.168.122.100:6803/2992979105]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 23 09:49:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 23 09:49:43 compute-0 ceph-osd[82641]: osd.1 0 done with init, starting boot process
Jan 23 09:49:43 compute-0 ceph-osd[82641]: osd.1 0 start_boot
Jan 23 09:49:43 compute-0 ceph-osd[82641]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 23 09:49:43 compute-0 ceph-osd[82641]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 23 09:49:43 compute-0 ceph-osd[82641]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 23 09:49:43 compute-0 ceph-osd[82641]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 23 09:49:43 compute-0 ceph-osd[82641]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 23 09:49:43 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 23 09:49:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:43 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:43 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:43 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:43 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:43 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:43 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:43 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:43 compute-0 ceph-mon[74335]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:43 compute-0 ceph-mon[74335]: from='osd.1 [v2:192.168.122.100:6802/2992979105,v1:192.168.122.100:6803/2992979105]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 23 09:49:43 compute-0 ceph-mon[74335]: osdmap e6: 2 total, 0 up, 2 in
Jan 23 09:49:43 compute-0 ceph-mon[74335]: from='osd.1 [v2:192.168.122.100:6802/2992979105,v1:192.168.122.100:6803/2992979105]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 23 09:49:43 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:43 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:43 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:43 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:44 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:44 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:44 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:44 compute-0 ceph-mon[74335]: from='osd.1 [v2:192.168.122.100:6802/2992979105,v1:192.168.122.100:6803/2992979105]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 23 09:49:44 compute-0 ceph-mon[74335]: osdmap e7: 2 total, 0 up, 2 in
Jan 23 09:49:44 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:44 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:44 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:44 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:49:45 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:45 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:45 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:46 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:46 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:49:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:46 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:46 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:46 compute-0 ceph-mon[74335]: purged_snaps scrub starts
Jan 23 09:49:46 compute-0 ceph-mon[74335]: purged_snaps scrub ok
Jan 23 09:49:46 compute-0 ceph-mon[74335]: pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:47 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:47 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:47 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:48 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:48 compute-0 ceph-mon[74335]: pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:48 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:48 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:48 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:48 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:48 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:48 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:49 compute-0 ceph-mon[74335]: pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:49 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:49 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:49 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:50 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:50 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:50 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:51 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:51 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:51 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v51: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 23 09:49:51 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/21403438,v1:192.168.122.101:6801/21403438]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 23 09:49:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 23 09:49:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:49:51 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/21403438,v1:192.168.122.101:6801/21403438]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 23 09:49:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Jan 23 09:49:51 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Jan 23 09:49:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Jan 23 09:49:51 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/21403438,v1:192.168.122.101:6801/21403438]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 23 09:49:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Jan 23 09:49:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:51 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:51 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:51 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:51 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:51 compute-0 ceph-mon[74335]: pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:51 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:51 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:51 compute-0 ceph-mon[74335]: from='osd.0 [v2:192.168.122.101:6800/21403438,v1:192.168.122.101:6801/21403438]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:52 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:52 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 23 09:49:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:49:52 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/21403438,v1:192.168.122.101:6801/21403438]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Jan 23 09:49:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Jan 23 09:49:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Jan 23 09:49:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:52 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:52 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:52 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:52 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:49:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:52 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:52 compute-0 ceph-mon[74335]: pgmap v51: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:52 compute-0 ceph-mon[74335]: from='osd.0 [v2:192.168.122.101:6800/21403438,v1:192.168.122.101:6801/21403438]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 23 09:49:52 compute-0 ceph-mon[74335]: osdmap e8: 2 total, 0 up, 2 in
Jan 23 09:49:52 compute-0 ceph-mon[74335]: from='osd.0 [v2:192.168.122.101:6800/21403438,v1:192.168.122.101:6801/21403438]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mon[74335]: from='osd.0 [v2:192.168.122.101:6800/21403438,v1:192.168.122.101:6801/21403438]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Jan 23 09:49:52 compute-0 ceph-mon[74335]: osdmap e9: 2 total, 0 up, 2 in
Jan 23 09:49:52 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:49:52 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:49:52 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:53 compute-0 sudo[83326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:49:53 compute-0 sudo[83326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:53 compute-0 sudo[83326]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:53 compute-0 sudo[83351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:49:53 compute-0 sudo[83351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:53 compute-0 sudo[83351]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:53 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:53 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:53 compute-0 sudo[83376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 09:49:53 compute-0 sudo[83376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:49:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:53 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:49:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:53 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:49:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:54 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:54 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:54 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:54 compute-0 podman[83470]: 2026-01-23 09:49:54.878246701 +0000 UTC m=+0.290415144 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:49:54 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:49:54 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:54 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:49:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:49:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:55 compute-0 podman[83470]: 2026-01-23 09:49:55.137830096 +0000 UTC m=+0.549998549 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:49:55 compute-0 ceph-mon[74335]: purged_snaps scrub starts
Jan 23 09:49:55 compute-0 ceph-mon[74335]: purged_snaps scrub ok
Jan 23 09:49:55 compute-0 ceph-mon[74335]: pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:55 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:55 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:55 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:55 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:55 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:49:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:55 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:55 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:55 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:55 compute-0 sudo[83376]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:49:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:49:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:55 compute-0 sudo[83552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:49:55 compute-0 sudo[83552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:55 compute-0 sudo[83552]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:55 compute-0 sudo[83577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 09:49:55 compute-0 sudo[83577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:55 compute-0 ceph-osd[82641]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 19.536 iops: 5001.198 elapsed_sec: 0.600
Jan 23 09:49:55 compute-0 ceph-osd[82641]: log_channel(cluster) log [WRN] : OSD bench result of 5001.198029 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
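[Editor's note] The [WRN] above is the mClock scheduler discarding an implausibly high bench figure (osd.1 sits on a loop device, per the _collect_metadata line below, so the 5001 IOPS result is likely cache-inflated) and keeping the default 315 IOPS. If a realistic capacity were later measured with fio, as the message itself recommends, it could be applied per OSD roughly as follows; the 150 IOPS value is purely illustrative, and this assumes osd.1 is in the hdd device class like osd.0 earlier in this log:

    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 150    # illustrative value, not measured here
    ceph config show osd.1 osd_mclock_max_capacity_iops_hdd       # confirm the override took effect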
Jan 23 09:49:55 compute-0 ceph-osd[82641]: osd.1 0 waiting for initial osdmap
Jan 23 09:49:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1[82637]: 2026-01-23T09:49:55.668+0000 7fe20a29b640 -1 osd.1 0 waiting for initial osdmap
Jan 23 09:49:55 compute-0 ceph-osd[82641]: osd.1 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 23 09:49:55 compute-0 ceph-osd[82641]: osd.1 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 23 09:49:55 compute-0 ceph-osd[82641]: osd.1 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 23 09:49:55 compute-0 ceph-osd[82641]: osd.1 9 check_osdmap_features require_osd_release unknown -> squid
Jan 23 09:49:55 compute-0 ceph-osd[82641]: osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 23 09:49:55 compute-0 ceph-osd[82641]: osd.1 9 set_numa_affinity not setting numa affinity
Jan 23 09:49:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-osd-1[82637]: 2026-01-23T09:49:55.692+0000 7fe2058c3640 -1 osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 23 09:49:55 compute-0 ceph-osd[82641]: osd.1 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 23 09:49:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v55: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:55 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:49:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:55 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:55 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:56 compute-0 sudo[83577]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:56 compute-0 sudo[83633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:49:56 compute-0 sudo[83633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:56 compute-0 sudo[83633]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:56 compute-0 sudo[83658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- inventory --format=json-pretty --filter-for-batch
Jan 23 09:49:56 compute-0 sudo[83658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:49:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:56 compute-0 ceph-mon[74335]: OSD bench result of 5001.198029 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 09:49:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:56 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2992979105; not ready for session (expect reconnect)
Jan 23 09:49:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:56 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 09:49:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 23 09:49:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:49:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Jan 23 09:49:56 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/2992979105,v1:192.168.122.100:6803/2992979105] boot
Jan 23 09:49:56 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Jan 23 09:49:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:49:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:56 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:56 compute-0 ceph-osd[82641]: osd.1 10 state: booting -> active
Jan 23 09:49:56 compute-0 sudo[83706]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhknbzzzpwiszjqybwnwqedccymnzspn ; /usr/bin/python3'
Jan 23 09:49:56 compute-0 sudo[83706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:49:56 compute-0 ceph-mgr[74633]: [devicehealth INFO root] creating mgr pool
Jan 23 09:49:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 23 09:49:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 23 09:49:56 compute-0 python3[83709]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:49:56 compute-0 podman[83750]: 2026-01-23 09:49:56.932574901 +0000 UTC m=+0.041471616 container create cebb03f07596252cfa88d423718f690818a47c43d13de272c24e75d4eb6e2ced (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 09:49:56 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:49:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:56 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:56 compute-0 podman[83762]: 2026-01-23 09:49:56.969548307 +0000 UTC m=+0.057546766 container create 101f1b1dad233d13a9a9eb817b944e221c44c3cb09325ad3fc03fe13df69918b (image=quay.io/ceph/ceph:v19, name=recursing_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:49:56 compute-0 systemd[1]: Started libpod-conmon-cebb03f07596252cfa88d423718f690818a47c43d13de272c24e75d4eb6e2ced.scope.
Jan 23 09:49:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:57 compute-0 systemd[1]: Started libpod-conmon-101f1b1dad233d13a9a9eb817b944e221c44c3cb09325ad3fc03fe13df69918b.scope.
Jan 23 09:49:57 compute-0 podman[83750]: 2026-01-23 09:49:56.913845613 +0000 UTC m=+0.022742348 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:57 compute-0 podman[83750]: 2026-01-23 09:49:57.015309658 +0000 UTC m=+0.124206393 container init cebb03f07596252cfa88d423718f690818a47c43d13de272c24e75d4eb6e2ced (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd742d6d619c219081c9b7735c72cbdaa45f382febd405666345595d01afb09d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd742d6d619c219081c9b7735c72cbdaa45f382febd405666345595d01afb09d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd742d6d619c219081c9b7735c72cbdaa45f382febd405666345595d01afb09d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:57 compute-0 podman[83750]: 2026-01-23 09:49:57.024564075 +0000 UTC m=+0.133460790 container start cebb03f07596252cfa88d423718f690818a47c43d13de272c24e75d4eb6e2ced (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_robinson, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 09:49:57 compute-0 podman[83750]: 2026-01-23 09:49:57.028871931 +0000 UTC m=+0.137768656 container attach cebb03f07596252cfa88d423718f690818a47c43d13de272c24e75d4eb6e2ced (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 23 09:49:57 compute-0 reverent_robinson[83781]: 167 167
Jan 23 09:49:57 compute-0 systemd[1]: libpod-cebb03f07596252cfa88d423718f690818a47c43d13de272c24e75d4eb6e2ced.scope: Deactivated successfully.
Jan 23 09:49:57 compute-0 podman[83762]: 2026-01-23 09:49:57.03420088 +0000 UTC m=+0.122199359 container init 101f1b1dad233d13a9a9eb817b944e221c44c3cb09325ad3fc03fe13df69918b (image=quay.io/ceph/ceph:v19, name=recursing_jones, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:49:57 compute-0 podman[83750]: 2026-01-23 09:49:57.034871425 +0000 UTC m=+0.143768140 container died cebb03f07596252cfa88d423718f690818a47c43d13de272c24e75d4eb6e2ced (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_robinson, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:49:57 compute-0 podman[83762]: 2026-01-23 09:49:56.945426048 +0000 UTC m=+0.033424547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:49:57 compute-0 podman[83762]: 2026-01-23 09:49:57.041593005 +0000 UTC m=+0.129591464 container start 101f1b1dad233d13a9a9eb817b944e221c44c3cb09325ad3fc03fe13df69918b (image=quay.io/ceph/ceph:v19, name=recursing_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 09:49:57 compute-0 podman[83762]: 2026-01-23 09:49:57.046378772 +0000 UTC m=+0.134377611 container attach 101f1b1dad233d13a9a9eb817b944e221c44c3cb09325ad3fc03fe13df69918b (image=quay.io/ceph/ceph:v19, name=recursing_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 09:49:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-72577a99e8acd5ff1fa62b494e72e1ab03ca6f5257ed85be937f6227d28c7de7-merged.mount: Deactivated successfully.
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:49:57 compute-0 podman[83750]: 2026-01-23 09:49:57.151805436 +0000 UTC m=+0.260702151 container remove cebb03f07596252cfa88d423718f690818a47c43d13de272c24e75d4eb6e2ced (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_robinson, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:49:57 compute-0 systemd[1]: libpod-conmon-cebb03f07596252cfa88d423718f690818a47c43d13de272c24e75d4eb6e2ced.scope: Deactivated successfully.
Jan 23 09:49:57 compute-0 ceph-mon[74335]: pgmap v55: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 09:49:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:57 compute-0 ceph-mon[74335]: osd.1 [v2:192.168.122.100:6802/2992979105,v1:192.168.122.100:6803/2992979105] boot
Jan 23 09:49:57 compute-0 ceph-mon[74335]: osdmap e10: 2 total, 1 up, 2 in
Jan 23 09:49:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:49:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 23 09:49:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:57 compute-0 podman[83823]: 2026-01-23 09:49:57.476277259 +0000 UTC m=+0.108988344 container create 471f710cff77f612a43900ef489296d51f149d6c9f9f743cb8a975b6369a030b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pascal, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:49:57 compute-0 systemd[1]: Started libpod-conmon-471f710cff77f612a43900ef489296d51f149d6c9f9f743cb8a975b6369a030b.scope.
Jan 23 09:49:57 compute-0 podman[83823]: 2026-01-23 09:49:57.452022688 +0000 UTC m=+0.084733793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 09:49:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 23 09:49:57 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:57 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 23 09:49:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 23 09:49:57 compute-0 ceph-osd[82641]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 23 09:49:57 compute-0 ceph-osd[82641]: osd.1 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 23 09:49:57 compute-0 ceph-osd[82641]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 23 09:49:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c020dc95018195d24393d1971346531755e6ae3efd332df1fd6f6859311b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c020dc95018195d24393d1971346531755e6ae3efd332df1fd6f6859311b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c020dc95018195d24393d1971346531755e6ae3efd332df1fd6f6859311b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c020dc95018195d24393d1971346531755e6ae3efd332df1fd6f6859311b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:57 compute-0 podman[83823]: 2026-01-23 09:49:57.579083724 +0000 UTC m=+0.211794819 container init 471f710cff77f612a43900ef489296d51f149d6c9f9f743cb8a975b6369a030b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pascal, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 09:49:57 compute-0 podman[83823]: 2026-01-23 09:49:57.587883111 +0000 UTC m=+0.220594206 container start 471f710cff77f612a43900ef489296d51f149d6c9f9f743cb8a975b6369a030b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pascal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 09:49:57 compute-0 podman[83823]: 2026-01-23 09:49:57.592186207 +0000 UTC m=+0.224897312 container attach 471f710cff77f612a43900ef489296d51f149d6c9f9f743cb8a975b6369a030b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pascal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:49:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 23 09:49:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001988432' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 09:49:57 compute-0 recursing_jones[83788]: 
Jan 23 09:49:57 compute-0 recursing_jones[83788]: {"fsid":"f3005f84-239a-55b6-a948-8f1fb592b920","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":135,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":11,"num_osds":2,"num_up_osds":1,"osd_up_since":1769161796,"num_in_osds":2,"osd_in_since":1769161767,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-23T09:47:38:565964+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-23T09:49:10.551708+0000","services":{}},"progress_events":{}}
Jan 23 09:49:57 compute-0 systemd[1]: libpod-101f1b1dad233d13a9a9eb817b944e221c44c3cb09325ad3fc03fe13df69918b.scope: Deactivated successfully.
Jan 23 09:49:57 compute-0 conmon[83788]: conmon 101f1b1dad233d13a9a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-101f1b1dad233d13a9a9eb817b944e221c44c3cb09325ad3fc03fe13df69918b.scope/container/memory.events
Jan 23 09:49:57 compute-0 podman[83762]: 2026-01-23 09:49:57.808378563 +0000 UTC m=+0.896377042 container died 101f1b1dad233d13a9a9eb817b944e221c44c3cb09325ad3fc03fe13df69918b (image=quay.io/ceph/ceph:v19, name=recursing_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 09:49:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd742d6d619c219081c9b7735c72cbdaa45f382febd405666345595d01afb09d-merged.mount: Deactivated successfully.
Jan 23 09:49:57 compute-0 podman[83762]: 2026-01-23 09:49:57.850851191 +0000 UTC m=+0.938849650 container remove 101f1b1dad233d13a9a9eb817b944e221c44c3cb09325ad3fc03fe13df69918b (image=quay.io/ceph/ceph:v19, name=recursing_jones, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 09:49:57 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:49:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:57 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:58 compute-0 sudo[83706]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:58 compute-0 systemd[1]: libpod-conmon-101f1b1dad233d13a9a9eb817b944e221c44c3cb09325ad3fc03fe13df69918b.scope: Deactivated successfully.
Jan 23 09:49:58 compute-0 sudo[84300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spygfpyzkfkpifbkfvdmmmhuyppvgwqx ; /usr/bin/python3'
Jan 23 09:49:58 compute-0 sudo[84300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:49:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]: [
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:     {
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:         "available": false,
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:         "being_replaced": false,
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:         "ceph_device_lvm": false,
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:         "lsm_data": {},
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:         "lvs": [],
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:         "path": "/dev/sr0",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:         "rejected_reasons": [
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "Has a FileSystem",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "Insufficient space (<5GB)"
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:         ],
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:         "sys_api": {
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "actuators": null,
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "device_nodes": [
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:                 "sr0"
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             ],
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "devname": "sr0",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "human_readable_size": "482.00 KB",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "id_bus": "ata",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "model": "QEMU DVD-ROM",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "nr_requests": "2",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "parent": "/dev/sr0",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "partitions": {},
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "path": "/dev/sr0",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "removable": "1",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "rev": "2.5+",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "ro": "0",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "rotational": "1",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "sas_address": "",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "sas_device_handle": "",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "scheduler_mode": "mq-deadline",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "sectors": 0,
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "sectorsize": "2048",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "size": 493568.0,
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "support_discard": "2048",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "type": "disk",
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:             "vendor": "QEMU"
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:         }
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]:     }
Jan 23 09:49:58 compute-0 mystifying_pascal[83849]: ]
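[Editor's note] The block above is the json-pretty inventory requested by the cephadm ceph-volume call at 09:49:56. The only device it reports, /dev/sr0, is the QEMU DVD-ROM and is rejected for the two reasons listed, so nothing on this host can be batched into new OSDs. If the same output were saved to inventory.json (hypothetical filename), the usable-device count could be pulled out with:

    jq '[.[] | select(.available == true)] | length' inventory.json   # prints 0 for this host
    jq '.[].rejected_reasons' inventory.json                          # shows why each device was skipped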
Jan 23 09:49:58 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:49:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 23 09:49:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e12 e12: 2 total, 1 up, 2 in
Jan 23 09:49:58 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 1 up, 2 in
Jan 23 09:49:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:58 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:58 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:58 compute-0 systemd[1]: libpod-471f710cff77f612a43900ef489296d51f149d6c9f9f743cb8a975b6369a030b.scope: Deactivated successfully.
Jan 23 09:49:58 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 23 09:49:58 compute-0 ceph-mon[74335]: osdmap e11: 2 total, 1 up, 2 in
Jan 23 09:49:58 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:58 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 23 09:49:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3001988432' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 09:49:58 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:58 compute-0 podman[83823]: 2026-01-23 09:49:58.60513735 +0000 UTC m=+1.237848455 container died 471f710cff77f612a43900ef489296d51f149d6c9f9f743cb8a975b6369a030b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pascal, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:49:58 compute-0 python3[84419]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d98c020dc95018195d24393d1971346531755e6ae3efd332df1fd6f6859311b3-merged.mount: Deactivated successfully.
Jan 23 09:49:58 compute-0 podman[83823]: 2026-01-23 09:49:58.654300128 +0000 UTC m=+1.287011213 container remove 471f710cff77f612a43900ef489296d51f149d6c9f9f743cb8a975b6369a030b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 09:49:58 compute-0 systemd[1]: libpod-conmon-471f710cff77f612a43900ef489296d51f149d6c9f9f743cb8a975b6369a030b.scope: Deactivated successfully.
Jan 23 09:49:58 compute-0 podman[84848]: 2026-01-23 09:49:58.697449951 +0000 UTC m=+0.061630497 container create a950afdbdeb8f08b98345c149bba037db94b7a630bf18ac8020eea9c328ef250 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 23 09:49:58 compute-0 sudo[83658]: pam_unix(sudo:session): session closed for user root
Jan 23 09:49:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:49:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:49:58 compute-0 systemd[1]: Started libpod-conmon-a950afdbdeb8f08b98345c149bba037db94b7a630bf18ac8020eea9c328ef250.scope.
Jan 23 09:49:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eacdd474736a5f835d0239a0748499124e0d3f1150c17124f8ecd91a1497f25/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eacdd474736a5f835d0239a0748499124e0d3f1150c17124f8ecd91a1497f25/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:49:58 compute-0 podman[84848]: 2026-01-23 09:49:58.678081069 +0000 UTC m=+0.042261635 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:49:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:58 compute-0 podman[84848]: 2026-01-23 09:49:58.789166589 +0000 UTC m=+0.153347165 container init a950afdbdeb8f08b98345c149bba037db94b7a630bf18ac8020eea9c328ef250 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 09:49:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:49:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:58 compute-0 podman[84848]: 2026-01-23 09:49:58.801870472 +0000 UTC m=+0.166051018 container start a950afdbdeb8f08b98345c149bba037db94b7a630bf18ac8020eea9c328ef250 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 09:49:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:49:58 compute-0 podman[84848]: 2026-01-23 09:49:58.808012269 +0000 UTC m=+0.172192835 container attach a950afdbdeb8f08b98345c149bba037db94b7a630bf18ac8020eea9c328ef250 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:49:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 23 09:49:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 23 09:49:58 compute-0 ceph-mgr[74633]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 23 09:49:58 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 23 09:49:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 23 09:49:58 compute-0 ceph-mgr[74633]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 09:49:58 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 09:49:58 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:49:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:58 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:58 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 09:49:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1204192632' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:49:59 compute-0 ceph-mon[74335]: pgmap v58: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 09:49:59 compute-0 ceph-mon[74335]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:49:59 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 23 09:49:59 compute-0 ceph-mon[74335]: osdmap e12: 2 total, 1 up, 2 in
Jan 23 09:49:59 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:59 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:59 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:59 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:59 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:49:59 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 23 09:49:59 compute-0 ceph-mon[74335]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 23 09:49:59 compute-0 ceph-mon[74335]: Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 09:49:59 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1204192632' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:49:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 09:49:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 23 09:49:59 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:49:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:49:59 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:49:59 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:49:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:50:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1204192632' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e13 e13: 2 total, 1 up, 2 in
Jan 23 09:50:00 compute-0 gallant_heyrovsky[84873]: pool 'vms' created
Jan 23 09:50:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 1 up, 2 in
Jan 23 09:50:00 compute-0 systemd[1]: libpod-a950afdbdeb8f08b98345c149bba037db94b7a630bf18ac8020eea9c328ef250.scope: Deactivated successfully.
Jan 23 09:50:00 compute-0 podman[84848]: 2026-01-23 09:50:00.069619883 +0000 UTC m=+1.433800429 container died a950afdbdeb8f08b98345c149bba037db94b7a630bf18ac8020eea9c328ef250 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:50:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:50:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:00 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:50:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:50:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:50:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7eacdd474736a5f835d0239a0748499124e0d3f1150c17124f8ecd91a1497f25-merged.mount: Deactivated successfully.
Jan 23 09:50:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 23 09:50:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 23 09:50:00 compute-0 ceph-mgr[74633]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 23 09:50:00 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 23 09:50:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 23 09:50:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:00 compute-0 podman[84848]: 2026-01-23 09:50:00.374330986 +0000 UTC m=+1.738511532 container remove a950afdbdeb8f08b98345c149bba037db94b7a630bf18ac8020eea9c328ef250 (image=quay.io/ceph/ceph:v19, name=gallant_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:50:00 compute-0 sudo[84300]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:00 compute-0 systemd[1]: libpod-conmon-a950afdbdeb8f08b98345c149bba037db94b7a630bf18ac8020eea9c328ef250.scope: Deactivated successfully.
Jan 23 09:50:00 compute-0 sudo[84935]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avphjphqnhkphwvwszkffjecfwlywrps ; /usr/bin/python3'
Jan 23 09:50:00 compute-0 sudo[84935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 09:50:00 compute-0 python3[84937]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:00 compute-0 ceph-mon[74335]: pgmap v60: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 09:50:00 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1204192632' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:00 compute-0 ceph-mon[74335]: osdmap e13: 2 total, 1 up, 2 in
Jan 23 09:50:00 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:00 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:00 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:00 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:00 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:00 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 23 09:50:00 compute-0 ceph-mon[74335]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 23 09:50:00 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:00 compute-0 podman[84938]: 2026-01-23 09:50:00.784654866 +0000 UTC m=+0.059316125 container create ddbc7349b6fe194395e127809ce438528668e79fc38d751d57e423c2c08cb76e (image=quay.io/ceph/ceph:v19, name=kind_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Jan 23 09:50:00 compute-0 systemd[1]: Started libpod-conmon-ddbc7349b6fe194395e127809ce438528668e79fc38d751d57e423c2c08cb76e.scope.
Jan 23 09:50:00 compute-0 podman[84938]: 2026-01-23 09:50:00.757580292 +0000 UTC m=+0.032241561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:00 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e9831ae959852fcdcaf186e9825e70d9ebd2c4392e74d98c5cb7943d8ac9fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e9831ae959852fcdcaf186e9825e70d9ebd2c4392e74d98c5cb7943d8ac9fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:00 compute-0 podman[84938]: 2026-01-23 09:50:00.903681263 +0000 UTC m=+0.178342542 container init ddbc7349b6fe194395e127809ce438528668e79fc38d751d57e423c2c08cb76e (image=quay.io/ceph/ceph:v19, name=kind_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:50:00 compute-0 podman[84938]: 2026-01-23 09:50:00.910865183 +0000 UTC m=+0.185526432 container start ddbc7349b6fe194395e127809ce438528668e79fc38d751d57e423c2c08cb76e (image=quay.io/ceph/ceph:v19, name=kind_panini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 09:50:00 compute-0 podman[84938]: 2026-01-23 09:50:00.926616265 +0000 UTC m=+0.201277534 container attach ddbc7349b6fe194395e127809ce438528668e79fc38d751d57e423c2c08cb76e (image=quay.io/ceph/ceph:v19, name=kind_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 09:50:00 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:50:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:50:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:00 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:50:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 09:50:01 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3874459659' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:50:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v62: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 09:50:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 23 09:50:01 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:50:01 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3874459659' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e14 e14: 2 total, 1 up, 2 in
Jan 23 09:50:01 compute-0 kind_panini[84954]: pool 'volumes' created
Jan 23 09:50:01 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 1 up, 2 in
Jan 23 09:50:01 compute-0 systemd[1]: libpod-ddbc7349b6fe194395e127809ce438528668e79fc38d751d57e423c2c08cb76e.scope: Deactivated successfully.
Jan 23 09:50:01 compute-0 podman[84938]: 2026-01-23 09:50:01.775213699 +0000 UTC m=+1.049874978 container died ddbc7349b6fe194395e127809ce438528668e79fc38d751d57e423c2c08cb76e (image=quay.io/ceph/ceph:v19, name=kind_panini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 09:50:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:50:01 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:01 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:50:01 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:01 compute-0 ceph-mon[74335]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 09:50:01 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3874459659' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6e9831ae959852fcdcaf186e9825e70d9ebd2c4392e74d98c5cb7943d8ac9fa-merged.mount: Deactivated successfully.
Jan 23 09:50:01 compute-0 podman[84938]: 2026-01-23 09:50:01.905564459 +0000 UTC m=+1.180225718 container remove ddbc7349b6fe194395e127809ce438528668e79fc38d751d57e423c2c08cb76e (image=quay.io/ceph/ceph:v19, name=kind_panini, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 09:50:01 compute-0 systemd[1]: libpod-conmon-ddbc7349b6fe194395e127809ce438528668e79fc38d751d57e423c2c08cb76e.scope: Deactivated successfully.
Jan 23 09:50:01 compute-0 sudo[84935]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:01 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:50:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:50:01 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:01 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:50:02 compute-0 sudo[85019]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opvjacnhtsxtdmjtyinjyhajcntqpnoh ; /usr/bin/python3'
Jan 23 09:50:02 compute-0 sudo[85019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:50:02 compute-0 python3[85021]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:02 compute-0 podman[85022]: 2026-01-23 09:50:02.281554163 +0000 UTC m=+0.052007802 container create cf334d3c31294ed299ffe0cafd9b942cb98cc53154d7912f9e8e8edcd5a38c27 (image=quay.io/ceph/ceph:v19, name=awesome_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 23 09:50:02 compute-0 systemd[1]: Started libpod-conmon-cf334d3c31294ed299ffe0cafd9b942cb98cc53154d7912f9e8e8edcd5a38c27.scope.
Jan 23 09:50:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a1e772ebf22e4ed80740a8b93b4f05ab6172853743f2ddf5e38edc5b322f64/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a1e772ebf22e4ed80740a8b93b4f05ab6172853743f2ddf5e38edc5b322f64/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:02 compute-0 podman[85022]: 2026-01-23 09:50:02.351617957 +0000 UTC m=+0.122071616 container init cf334d3c31294ed299ffe0cafd9b942cb98cc53154d7912f9e8e8edcd5a38c27 (image=quay.io/ceph/ceph:v19, name=awesome_kilby, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 23 09:50:02 compute-0 podman[85022]: 2026-01-23 09:50:02.256390991 +0000 UTC m=+0.026844650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:02 compute-0 podman[85022]: 2026-01-23 09:50:02.360213419 +0000 UTC m=+0.130667058 container start cf334d3c31294ed299ffe0cafd9b942cb98cc53154d7912f9e8e8edcd5a38c27 (image=quay.io/ceph/ceph:v19, name=awesome_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 09:50:02 compute-0 podman[85022]: 2026-01-23 09:50:02.364701649 +0000 UTC m=+0.135155288 container attach cf334d3c31294ed299ffe0cafd9b942cb98cc53154d7912f9e8e8edcd5a38c27 (image=quay.io/ceph/ceph:v19, name=awesome_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 09:50:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 09:50:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/930228594' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:50:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 23 09:50:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/930228594' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e15 e15: 2 total, 1 up, 2 in
Jan 23 09:50:02 compute-0 awesome_kilby[85038]: pool 'backups' created
Jan 23 09:50:02 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 1 up, 2 in
Jan 23 09:50:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:50:02 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:02 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:50:02 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 15 pg[4.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:02 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:02 compute-0 systemd[1]: libpod-cf334d3c31294ed299ffe0cafd9b942cb98cc53154d7912f9e8e8edcd5a38c27.scope: Deactivated successfully.
Jan 23 09:50:02 compute-0 podman[85022]: 2026-01-23 09:50:02.782901945 +0000 UTC m=+0.553355594 container died cf334d3c31294ed299ffe0cafd9b942cb98cc53154d7912f9e8e8edcd5a38c27 (image=quay.io/ceph/ceph:v19, name=awesome_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7a1e772ebf22e4ed80740a8b93b4f05ab6172853743f2ddf5e38edc5b322f64-merged.mount: Deactivated successfully.
Jan 23 09:50:02 compute-0 podman[85022]: 2026-01-23 09:50:02.822806346 +0000 UTC m=+0.593259985 container remove cf334d3c31294ed299ffe0cafd9b942cb98cc53154d7912f9e8e8edcd5a38c27 (image=quay.io/ceph/ceph:v19, name=awesome_kilby, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 09:50:02 compute-0 systemd[1]: libpod-conmon-cf334d3c31294ed299ffe0cafd9b942cb98cc53154d7912f9e8e8edcd5a38c27.scope: Deactivated successfully.
Jan 23 09:50:02 compute-0 ceph-mon[74335]: pgmap v62: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 09:50:02 compute-0 ceph-mon[74335]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:50:02 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3874459659' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:02 compute-0 ceph-mon[74335]: osdmap e14: 2 total, 1 up, 2 in
Jan 23 09:50:02 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:02 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:02 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/930228594' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:50:02 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/930228594' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:02 compute-0 ceph-mon[74335]: osdmap e15: 2 total, 1 up, 2 in
Jan 23 09:50:02 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:02 compute-0 sudo[85019]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:02 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:50:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:50:02 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:02 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:50:03 compute-0 sudo[85100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lclwgaossxiokrdjwqzouvrfhgawwtbh ; /usr/bin/python3'
Jan 23 09:50:03 compute-0 sudo[85100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:03 compute-0 python3[85102]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:03 compute-0 podman[85103]: 2026-01-23 09:50:03.289200267 +0000 UTC m=+0.072734265 container create 2f901904ffe83326e076593026fab850cf402025a1d621c390b8bc8f9d64288c (image=quay.io/ceph/ceph:v19, name=cool_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:50:03 compute-0 systemd[1]: Started libpod-conmon-2f901904ffe83326e076593026fab850cf402025a1d621c390b8bc8f9d64288c.scope.
Jan 23 09:50:03 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895d19c6a8b7493d5343b366a74bbe54d6b5676a1d68d625352e867d5820a61c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895d19c6a8b7493d5343b366a74bbe54d6b5676a1d68d625352e867d5820a61c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:03 compute-0 podman[85103]: 2026-01-23 09:50:03.269922617 +0000 UTC m=+0.053456645 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:03 compute-0 podman[85103]: 2026-01-23 09:50:03.365178353 +0000 UTC m=+0.148712351 container init 2f901904ffe83326e076593026fab850cf402025a1d621c390b8bc8f9d64288c (image=quay.io/ceph/ceph:v19, name=cool_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:50:03 compute-0 podman[85103]: 2026-01-23 09:50:03.371271269 +0000 UTC m=+0.154805267 container start 2f901904ffe83326e076593026fab850cf402025a1d621c390b8bc8f9d64288c (image=quay.io/ceph/ceph:v19, name=cool_curran, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:50:03 compute-0 podman[85103]: 2026-01-23 09:50:03.376531686 +0000 UTC m=+0.160065704 container attach 2f901904ffe83326e076593026fab850cf402025a1d621c390b8bc8f9d64288c (image=quay.io/ceph/ceph:v19, name=cool_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 23 09:50:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v65: 4 pgs: 1 creating+peering, 3 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 09:50:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 09:50:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/259791107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:50:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 23 09:50:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/259791107' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e16 e16: 2 total, 1 up, 2 in
Jan 23 09:50:03 compute-0 cool_curran[85119]: pool 'images' created
Jan 23 09:50:03 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 1 up, 2 in
Jan 23 09:50:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:50:03 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:03 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:50:03 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 16 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:03 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 16 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:03 compute-0 systemd[1]: libpod-2f901904ffe83326e076593026fab850cf402025a1d621c390b8bc8f9d64288c.scope: Deactivated successfully.
Jan 23 09:50:03 compute-0 podman[85103]: 2026-01-23 09:50:03.785691521 +0000 UTC m=+0.569225529 container died 2f901904ffe83326e076593026fab850cf402025a1d621c390b8bc8f9d64288c (image=quay.io/ceph/ceph:v19, name=cool_curran, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 23 09:50:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-895d19c6a8b7493d5343b366a74bbe54d6b5676a1d68d625352e867d5820a61c-merged.mount: Deactivated successfully.
Jan 23 09:50:03 compute-0 podman[85103]: 2026-01-23 09:50:03.819847563 +0000 UTC m=+0.603381561 container remove 2f901904ffe83326e076593026fab850cf402025a1d621c390b8bc8f9d64288c (image=quay.io/ceph/ceph:v19, name=cool_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:50:03 compute-0 systemd[1]: libpod-conmon-2f901904ffe83326e076593026fab850cf402025a1d621c390b8bc8f9d64288c.scope: Deactivated successfully.
Jan 23 09:50:03 compute-0 sudo[85100]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:03 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/259791107' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:50:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/259791107' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:03 compute-0 ceph-mon[74335]: osdmap e16: 2 total, 1 up, 2 in
Jan 23 09:50:03 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:03 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/21403438; not ready for session (expect reconnect)
Jan 23 09:50:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:50:03 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:03 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 09:50:03 compute-0 sudo[85181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdquexiowhdxxkjtgnncvoyiegcnlfpv ; /usr/bin/python3'
Jan 23 09:50:03 compute-0 sudo[85181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:04 compute-0 python3[85183]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:04 compute-0 podman[85184]: 2026-01-23 09:50:04.214140316 +0000 UTC m=+0.048825791 container create ba11f40f93b429ffca82658d4b529f7c9a747d61984d44cabeb8f9d8c689468c (image=quay.io/ceph/ceph:v19, name=great_poitras, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 09:50:04 compute-0 systemd[1]: Started libpod-conmon-ba11f40f93b429ffca82658d4b529f7c9a747d61984d44cabeb8f9d8c689468c.scope.
Jan 23 09:50:04 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b2628620109b3537974c02ec6e575f86f07d30905f3330b311deca769f7697/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b2628620109b3537974c02ec6e575f86f07d30905f3330b311deca769f7697/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:04 compute-0 podman[85184]: 2026-01-23 09:50:04.277849788 +0000 UTC m=+0.112535273 container init ba11f40f93b429ffca82658d4b529f7c9a747d61984d44cabeb8f9d8c689468c (image=quay.io/ceph/ceph:v19, name=great_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:50:04 compute-0 podman[85184]: 2026-01-23 09:50:04.283947664 +0000 UTC m=+0.118633139 container start ba11f40f93b429ffca82658d4b529f7c9a747d61984d44cabeb8f9d8c689468c (image=quay.io/ceph/ceph:v19, name=great_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:50:04 compute-0 podman[85184]: 2026-01-23 09:50:04.19328125 +0000 UTC m=+0.027966745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:04 compute-0 podman[85184]: 2026-01-23 09:50:04.288038025 +0000 UTC m=+0.122723500 container attach ba11f40f93b429ffca82658d4b529f7c9a747d61984d44cabeb8f9d8c689468c (image=quay.io/ceph/ceph:v19, name=great_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 23 09:50:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 09:50:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3993334949' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:50:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 23 09:50:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3993334949' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 23 09:50:04 compute-0 great_poitras[85199]: pool 'cephfs.cephfs.meta' created
Jan 23 09:50:04 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/21403438,v1:192.168.122.101:6801/21403438] boot
Jan 23 09:50:04 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 23 09:50:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:50:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:04 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 17 pg[6.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:04 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 17 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:04 compute-0 systemd[1]: libpod-ba11f40f93b429ffca82658d4b529f7c9a747d61984d44cabeb8f9d8c689468c.scope: Deactivated successfully.
Jan 23 09:50:04 compute-0 podman[85184]: 2026-01-23 09:50:04.793930099 +0000 UTC m=+0.628615574 container died ba11f40f93b429ffca82658d4b529f7c9a747d61984d44cabeb8f9d8c689468c (image=quay.io/ceph/ceph:v19, name=great_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-26b2628620109b3537974c02ec6e575f86f07d30905f3330b311deca769f7697-merged.mount: Deactivated successfully.
Jan 23 09:50:04 compute-0 podman[85184]: 2026-01-23 09:50:04.83115708 +0000 UTC m=+0.665842555 container remove ba11f40f93b429ffca82658d4b529f7c9a747d61984d44cabeb8f9d8c689468c (image=quay.io/ceph/ceph:v19, name=great_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:04 compute-0 systemd[1]: libpod-conmon-ba11f40f93b429ffca82658d4b529f7c9a747d61984d44cabeb8f9d8c689468c.scope: Deactivated successfully.
Jan 23 09:50:04 compute-0 sudo[85181]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:04 compute-0 ceph-mon[74335]: pgmap v65: 4 pgs: 1 creating+peering, 3 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 23 09:50:04 compute-0 ceph-mon[74335]: OSD bench result of 3429.482546 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 09:50:04 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3993334949' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:50:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3993334949' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:04 compute-0 ceph-mon[74335]: osd.0 [v2:192.168.122.101:6800/21403438,v1:192.168.122.101:6801/21403438] boot
Jan 23 09:50:04 compute-0 ceph-mon[74335]: osdmap e17: 2 total, 2 up, 2 in
Jan 23 09:50:04 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:50:04 compute-0 sudo[85262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umiuptrxrodcbupzmoojcnrpjkzizbzz ; /usr/bin/python3'
Jan 23 09:50:04 compute-0 sudo[85262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:05 compute-0 python3[85264]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:05 compute-0 podman[85265]: 2026-01-23 09:50:05.201007367 +0000 UTC m=+0.048339360 container create 1932b2572a72360d06989c0ca42208cad76485308c6b25aa26483feeb7b8d844 (image=quay.io/ceph/ceph:v19, name=pedantic_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 09:50:05 compute-0 systemd[1]: Started libpod-conmon-1932b2572a72360d06989c0ca42208cad76485308c6b25aa26483feeb7b8d844.scope.
Jan 23 09:50:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a6d8315f31740aa2a5674615c9f922157bf6593c2275ad8109e03d55a4bad3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a6d8315f31740aa2a5674615c9f922157bf6593c2275ad8109e03d55a4bad3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:05 compute-0 podman[85265]: 2026-01-23 09:50:05.181858299 +0000 UTC m=+0.029190322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:05 compute-0 podman[85265]: 2026-01-23 09:50:05.285937593 +0000 UTC m=+0.133269606 container init 1932b2572a72360d06989c0ca42208cad76485308c6b25aa26483feeb7b8d844 (image=quay.io/ceph/ceph:v19, name=pedantic_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 09:50:05 compute-0 podman[85265]: 2026-01-23 09:50:05.29119468 +0000 UTC m=+0.138526673 container start 1932b2572a72360d06989c0ca42208cad76485308c6b25aa26483feeb7b8d844 (image=quay.io/ceph/ceph:v19, name=pedantic_mendel, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:50:05 compute-0 podman[85265]: 2026-01-23 09:50:05.295241141 +0000 UTC m=+0.142573164 container attach 1932b2572a72360d06989c0ca42208cad76485308c6b25aa26483feeb7b8d844 (image=quay.io/ceph/ceph:v19, name=pedantic_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:50:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 23 09:50:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3079864635' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:50:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v68: 6 pgs: 3 creating+peering, 3 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 23 09:50:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3079864635' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Jan 23 09:50:05 compute-0 pedantic_mendel[85281]: pool 'cephfs.cephfs.data' created
Jan 23 09:50:05 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 23 09:50:05 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 18 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:05 compute-0 systemd[1]: libpod-1932b2572a72360d06989c0ca42208cad76485308c6b25aa26483feeb7b8d844.scope: Deactivated successfully.
Jan 23 09:50:05 compute-0 podman[85265]: 2026-01-23 09:50:05.799430546 +0000 UTC m=+0.646762559 container died 1932b2572a72360d06989c0ca42208cad76485308c6b25aa26483feeb7b8d844 (image=quay.io/ceph/ceph:v19, name=pedantic_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 23 09:50:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-30a6d8315f31740aa2a5674615c9f922157bf6593c2275ad8109e03d55a4bad3-merged.mount: Deactivated successfully.
Jan 23 09:50:05 compute-0 podman[85265]: 2026-01-23 09:50:05.836456103 +0000 UTC m=+0.683788106 container remove 1932b2572a72360d06989c0ca42208cad76485308c6b25aa26483feeb7b8d844 (image=quay.io/ceph/ceph:v19, name=pedantic_mendel, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 09:50:05 compute-0 systemd[1]: libpod-conmon-1932b2572a72360d06989c0ca42208cad76485308c6b25aa26483feeb7b8d844.scope: Deactivated successfully.
Jan 23 09:50:05 compute-0 sudo[85262]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3079864635' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 09:50:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3079864635' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 09:50:05 compute-0 ceph-mon[74335]: osdmap e18: 2 total, 2 up, 2 in
Jan 23 09:50:06 compute-0 sudo[85343]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqfjaoefpmzfxjrmwwrrbjrblxusjjjm ; /usr/bin/python3'
Jan 23 09:50:06 compute-0 sudo[85343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:06 compute-0 python3[85345]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:06 compute-0 podman[85346]: 2026-01-23 09:50:06.24217039 +0000 UTC m=+0.047891130 container create f501dfa8b8b30ea5131a69bd78c7a1ebf502b93711f2fe2c415873cee9063f32 (image=quay.io/ceph/ceph:v19, name=keen_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:06 compute-0 systemd[1]: Started libpod-conmon-f501dfa8b8b30ea5131a69bd78c7a1ebf502b93711f2fe2c415873cee9063f32.scope.
Jan 23 09:50:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee8c8831093f659a438d0968a7f841602ed24705bdf0c18ae1aaeff1e2b0d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee8c8831093f659a438d0968a7f841602ed24705bdf0c18ae1aaeff1e2b0d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:06 compute-0 podman[85346]: 2026-01-23 09:50:06.31293545 +0000 UTC m=+0.118656190 container init f501dfa8b8b30ea5131a69bd78c7a1ebf502b93711f2fe2c415873cee9063f32 (image=quay.io/ceph/ceph:v19, name=keen_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:50:06 compute-0 podman[85346]: 2026-01-23 09:50:06.319071317 +0000 UTC m=+0.124792057 container start f501dfa8b8b30ea5131a69bd78c7a1ebf502b93711f2fe2c415873cee9063f32 (image=quay.io/ceph/ceph:v19, name=keen_moser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:50:06 compute-0 podman[85346]: 2026-01-23 09:50:06.226152873 +0000 UTC m=+0.031873633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:06 compute-0 podman[85346]: 2026-01-23 09:50:06.322728739 +0000 UTC m=+0.128449479 container attach f501dfa8b8b30ea5131a69bd78c7a1ebf502b93711f2fe2c415873cee9063f32 (image=quay.io/ceph/ceph:v19, name=keen_moser, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:50:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 23 09:50:06 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1653895368' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 23 09:50:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 23 09:50:06 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1653895368' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 23 09:50:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Jan 23 09:50:06 compute-0 keen_moser[85362]: enabled application 'rbd' on pool 'vms'
Jan 23 09:50:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Jan 23 09:50:06 compute-0 systemd[1]: libpod-f501dfa8b8b30ea5131a69bd78c7a1ebf502b93711f2fe2c415873cee9063f32.scope: Deactivated successfully.
Jan 23 09:50:06 compute-0 podman[85346]: 2026-01-23 09:50:06.801577848 +0000 UTC m=+0.607298588 container died f501dfa8b8b30ea5131a69bd78c7a1ebf502b93711f2fe2c415873cee9063f32 (image=quay.io/ceph/ceph:v19, name=keen_moser, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 09:50:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-32ee8c8831093f659a438d0968a7f841602ed24705bdf0c18ae1aaeff1e2b0d1-merged.mount: Deactivated successfully.
Jan 23 09:50:06 compute-0 podman[85346]: 2026-01-23 09:50:06.840347203 +0000 UTC m=+0.646067943 container remove f501dfa8b8b30ea5131a69bd78c7a1ebf502b93711f2fe2c415873cee9063f32 (image=quay.io/ceph/ceph:v19, name=keen_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:50:06 compute-0 systemd[1]: libpod-conmon-f501dfa8b8b30ea5131a69bd78c7a1ebf502b93711f2fe2c415873cee9063f32.scope: Deactivated successfully.
Jan 23 09:50:06 compute-0 sudo[85343]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:06 compute-0 ceph-mon[74335]: pgmap v68: 6 pgs: 3 creating+peering, 3 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:06 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1653895368' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 23 09:50:06 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1653895368' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 23 09:50:06 compute-0 ceph-mon[74335]: osdmap e19: 2 total, 2 up, 2 in
Jan 23 09:50:06 compute-0 sudo[85423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbrachzewqdfdipusfiwkjeknjmyxzkr ; /usr/bin/python3'
Jan 23 09:50:06 compute-0 sudo[85423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:07 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:50:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:50:07 compute-0 python3[85425]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:07 compute-0 podman[85426]: 2026-01-23 09:50:07.187999944 +0000 UTC m=+0.042550651 container create c603154a4386ac1f6ce50cfa90d27d142fcccaf937e5d6cc0d79dc9877398ed0 (image=quay.io/ceph/ceph:v19, name=friendly_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 09:50:07 compute-0 systemd[1]: Started libpod-conmon-c603154a4386ac1f6ce50cfa90d27d142fcccaf937e5d6cc0d79dc9877398ed0.scope.
Jan 23 09:50:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b68f643dc2074ba55f2958f417ec2034cdb36112a9fb5e9f875b96326c005bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b68f643dc2074ba55f2958f417ec2034cdb36112a9fb5e9f875b96326c005bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:07 compute-0 podman[85426]: 2026-01-23 09:50:07.262279932 +0000 UTC m=+0.116830639 container init c603154a4386ac1f6ce50cfa90d27d142fcccaf937e5d6cc0d79dc9877398ed0 (image=quay.io/ceph/ceph:v19, name=friendly_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 09:50:07 compute-0 podman[85426]: 2026-01-23 09:50:07.167240631 +0000 UTC m=+0.021791358 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:07 compute-0 podman[85426]: 2026-01-23 09:50:07.268613834 +0000 UTC m=+0.123164541 container start c603154a4386ac1f6ce50cfa90d27d142fcccaf937e5d6cc0d79dc9877398ed0 (image=quay.io/ceph/ceph:v19, name=friendly_mcnulty, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 23 09:50:07 compute-0 podman[85426]: 2026-01-23 09:50:07.312625086 +0000 UTC m=+0.167175893 container attach c603154a4386ac1f6ce50cfa90d27d142fcccaf937e5d6cc0d79dc9877398ed0 (image=quay.io/ceph/ceph:v19, name=friendly_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:50:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 23 09:50:07 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/870404708' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 23 09:50:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 4 active+clean, 2 creating+peering, 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 23 09:50:07 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/870404708' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 23 09:50:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Jan 23 09:50:07 compute-0 friendly_mcnulty[85440]: enabled application 'rbd' on pool 'volumes'
Jan 23 09:50:07 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Jan 23 09:50:07 compute-0 systemd[1]: libpod-c603154a4386ac1f6ce50cfa90d27d142fcccaf937e5d6cc0d79dc9877398ed0.scope: Deactivated successfully.
Jan 23 09:50:07 compute-0 podman[85465]: 2026-01-23 09:50:07.848855207 +0000 UTC m=+0.027826182 container died c603154a4386ac1f6ce50cfa90d27d142fcccaf937e5d6cc0d79dc9877398ed0 (image=quay.io/ceph/ceph:v19, name=friendly_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 09:50:07 compute-0 ceph-mon[74335]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:50:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/870404708' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 23 09:50:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/870404708' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 23 09:50:07 compute-0 ceph-mon[74335]: osdmap e20: 2 total, 2 up, 2 in
Jan 23 09:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b68f643dc2074ba55f2958f417ec2034cdb36112a9fb5e9f875b96326c005bd-merged.mount: Deactivated successfully.
Jan 23 09:50:07 compute-0 podman[85465]: 2026-01-23 09:50:07.976486897 +0000 UTC m=+0.155457872 container remove c603154a4386ac1f6ce50cfa90d27d142fcccaf937e5d6cc0d79dc9877398ed0 (image=quay.io/ceph/ceph:v19, name=friendly_mcnulty, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:50:07 compute-0 systemd[1]: libpod-conmon-c603154a4386ac1f6ce50cfa90d27d142fcccaf937e5d6cc0d79dc9877398ed0.scope: Deactivated successfully.
Jan 23 09:50:08 compute-0 sudo[85423]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:08 compute-0 sudo[85503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjwjpuqgkhghhwbgxigwilpxzlrjiglc ; /usr/bin/python3'
Jan 23 09:50:08 compute-0 sudo[85503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:08 compute-0 python3[85505]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:08 compute-0 podman[85506]: 2026-01-23 09:50:08.346219651 +0000 UTC m=+0.024987859 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:08 compute-0 podman[85506]: 2026-01-23 09:50:08.458486107 +0000 UTC m=+0.137254295 container create 729b2e3790549bdb4a7b34cd9a1ca5f3081c7168b22816ea06d963d4d2dededd (image=quay.io/ceph/ceph:v19, name=stupefied_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 09:50:08 compute-0 systemd[1]: Started libpod-conmon-729b2e3790549bdb4a7b34cd9a1ca5f3081c7168b22816ea06d963d4d2dededd.scope.
Jan 23 09:50:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893c42c126cd5c599b9041be86446e7b75968ad805f13baebfef77efa5f8197f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893c42c126cd5c599b9041be86446e7b75968ad805f13baebfef77efa5f8197f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:50:08
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [balancer INFO root] Some PGs (0.142857) are unknown; try again later
Jan 23 09:50:08 compute-0 podman[85506]: 2026-01-23 09:50:08.540542149 +0000 UTC m=+0.219310337 container init 729b2e3790549bdb4a7b34cd9a1ca5f3081c7168b22816ea06d963d4d2dededd (image=quay.io/ceph/ceph:v19, name=stupefied_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:50:08 compute-0 podman[85506]: 2026-01-23 09:50:08.545567581 +0000 UTC m=+0.224335759 container start 729b2e3790549bdb4a7b34cd9a1ca5f3081c7168b22816ea06d963d4d2dededd (image=quay.io/ceph/ceph:v19, name=stupefied_pasteur, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 23 09:50:08 compute-0 podman[85506]: 2026-01-23 09:50:08.550799658 +0000 UTC m=+0.229567876 container attach 729b2e3790549bdb4a7b34cd9a1ca5f3081c7168b22816ea06d963d4d2dededd (image=quay.io/ceph/ceph:v19, name=stupefied_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [devicehealth INFO root] creating main.db for devicehealth
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 23 09:50:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 09:50:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:50:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 23 09:50:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1144026165' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 23 09:50:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 23 09:50:08 compute-0 ceph-mon[74335]: pgmap v71: 7 pgs: 4 active+clean, 2 creating+peering, 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:08 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1144026165' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 23 09:50:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1144026165' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 23 09:50:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Jan 23 09:50:08 compute-0 stupefied_pasteur[85521]: enabled application 'rbd' on pool 'backups'
Jan 23 09:50:08 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev ed090298-d94c-48c4-aa33-bd40d612b85c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 23 09:50:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 23 09:50:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:08 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Check health
Jan 23 09:50:08 compute-0 systemd[1]: libpod-729b2e3790549bdb4a7b34cd9a1ca5f3081c7168b22816ea06d963d4d2dededd.scope: Deactivated successfully.
Jan 23 09:50:08 compute-0 podman[85506]: 2026-01-23 09:50:08.996462378 +0000 UTC m=+0.675230596 container died 729b2e3790549bdb4a7b34cd9a1ca5f3081c7168b22816ea06d963d4d2dededd (image=quay.io/ceph/ceph:v19, name=stupefied_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:50:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 23 09:50:09 compute-0 sudo[85564]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 23 09:50:09 compute-0 sudo[85564]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 23 09:50:09 compute-0 sudo[85564]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 23 09:50:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-893c42c126cd5c599b9041be86446e7b75968ad805f13baebfef77efa5f8197f-merged.mount: Deactivated successfully.
Jan 23 09:50:09 compute-0 sudo[85564]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 23 09:50:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 23 09:50:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:50:09 compute-0 podman[85506]: 2026-01-23 09:50:09.073417475 +0000 UTC m=+0.752185663 container remove 729b2e3790549bdb4a7b34cd9a1ca5f3081c7168b22816ea06d963d4d2dededd (image=quay.io/ceph/ceph:v19, name=stupefied_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:50:09 compute-0 systemd[1]: libpod-conmon-729b2e3790549bdb4a7b34cd9a1ca5f3081c7168b22816ea06d963d4d2dededd.scope: Deactivated successfully.
Jan 23 09:50:09 compute-0 sudo[85503]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:09 compute-0 sudo[85598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwygnhvsfekcofareovyfphoamwxwflm ; /usr/bin/python3'
Jan 23 09:50:09 compute-0 sudo[85598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:09 compute-0 python3[85600]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:09 compute-0 podman[85601]: 2026-01-23 09:50:09.421224829 +0000 UTC m=+0.026573014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:09 compute-0 podman[85601]: 2026-01-23 09:50:09.591315697 +0000 UTC m=+0.196663862 container create feeeed7f68f188f4730964724ef72334bee575e92b7b9dc4aa1486cb3c33dbef (image=quay.io/ceph/ceph:v19, name=adoring_bartik, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:50:09 compute-0 systemd[1]: Started libpod-conmon-feeeed7f68f188f4730964724ef72334bee575e92b7b9dc4aa1486cb3c33dbef.scope.
Jan 23 09:50:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8c2e080eed11807a0c2a8ffc4023c0b417c18b8584991f8a04c46eba04c21b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8c2e080eed11807a0c2a8ffc4023c0b417c18b8584991f8a04c46eba04c21b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:09 compute-0 podman[85601]: 2026-01-23 09:50:09.680819165 +0000 UTC m=+0.286167350 container init feeeed7f68f188f4730964724ef72334bee575e92b7b9dc4aa1486cb3c33dbef (image=quay.io/ceph/ceph:v19, name=adoring_bartik, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:50:09 compute-0 podman[85601]: 2026-01-23 09:50:09.687480073 +0000 UTC m=+0.292828238 container start feeeed7f68f188f4730964724ef72334bee575e92b7b9dc4aa1486cb3c33dbef (image=quay.io/ceph/ceph:v19, name=adoring_bartik, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:50:09 compute-0 podman[85601]: 2026-01-23 09:50:09.691413291 +0000 UTC m=+0.296761476 container attach feeeed7f68f188f4730964724ef72334bee575e92b7b9dc4aa1486cb3c33dbef (image=quay.io/ceph/ceph:v19, name=adoring_bartik, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:50:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 4 active+clean, 2 creating+peering, 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 23 09:50:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Jan 23 09:50:09 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Jan 23 09:50:09 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 898cdd0b-0ae5-44d8-8340-453a3e1878c4 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 23 09:50:09 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:09 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1144026165' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 23 09:50:09 compute-0 ceph-mon[74335]: osdmap e21: 2 total, 2 up, 2 in
Jan 23 09:50:09 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:09 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 23 09:50:09 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 23 09:50:09 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:50:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 23 09:50:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 23 09:50:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1803776421' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 23 09:50:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 23 09:50:10 compute-0 ceph-mon[74335]: pgmap v74: 7 pgs: 4 active+clean, 2 creating+peering, 1 unknown; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:10 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:10 compute-0 ceph-mon[74335]: osdmap e22: 2 total, 2 up, 2 in
Jan 23 09:50:10 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1803776421' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 23 09:50:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1803776421' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 23 09:50:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Jan 23 09:50:11 compute-0 adoring_bartik[85616]: enabled application 'rbd' on pool 'images'
Jan 23 09:50:11 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Jan 23 09:50:11 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.nbdygh(active, since 2m)
Jan 23 09:50:11 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 8b423136-8b0d-4684-802d-a17d11d2e7d9 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 23 09:50:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 23 09:50:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:11 compute-0 systemd[1]: libpod-feeeed7f68f188f4730964724ef72334bee575e92b7b9dc4aa1486cb3c33dbef.scope: Deactivated successfully.
Jan 23 09:50:11 compute-0 podman[85601]: 2026-01-23 09:50:11.028894848 +0000 UTC m=+1.634243013 container died feeeed7f68f188f4730964724ef72334bee575e92b7b9dc4aa1486cb3c33dbef (image=quay.io/ceph/ceph:v19, name=adoring_bartik, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 09:50:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf8c2e080eed11807a0c2a8ffc4023c0b417c18b8584991f8a04c46eba04c21b-merged.mount: Deactivated successfully.
Jan 23 09:50:11 compute-0 podman[85601]: 2026-01-23 09:50:11.308918499 +0000 UTC m=+1.914266654 container remove feeeed7f68f188f4730964724ef72334bee575e92b7b9dc4aa1486cb3c33dbef (image=quay.io/ceph/ceph:v19, name=adoring_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:11 compute-0 sudo[85598]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:11 compute-0 systemd[1]: libpod-conmon-feeeed7f68f188f4730964724ef72334bee575e92b7b9dc4aa1486cb3c33dbef.scope: Deactivated successfully.
Jan 23 09:50:11 compute-0 sudo[85675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-levcmlbuwrknjdodawruumpvibykgcme ; /usr/bin/python3'
Jan 23 09:50:11 compute-0 sudo[85675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:11 compute-0 python3[85677]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:11 compute-0 podman[85678]: 2026-01-23 09:50:11.70851462 +0000 UTC m=+0.044626518 container create 01c70c5b5a930b5967977349c53e87ef9bb0ab05295906029ab389e2f61c2f2a (image=quay.io/ceph/ceph:v19, name=blissful_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 09:50:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 09:50:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 09:50:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:11 compute-0 systemd[1]: Started libpod-conmon-01c70c5b5a930b5967977349c53e87ef9bb0ab05295906029ab389e2f61c2f2a.scope.
Jan 23 09:50:11 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9b55fa64d21e5ce947f4dea1c3396145b1bd31b8f62b39a90632dbb435bbaa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9b55fa64d21e5ce947f4dea1c3396145b1bd31b8f62b39a90632dbb435bbaa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:11 compute-0 podman[85678]: 2026-01-23 09:50:11.690447376 +0000 UTC m=+0.026559294 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:11 compute-0 podman[85678]: 2026-01-23 09:50:11.787709148 +0000 UTC m=+0.123821076 container init 01c70c5b5a930b5967977349c53e87ef9bb0ab05295906029ab389e2f61c2f2a (image=quay.io/ceph/ceph:v19, name=blissful_kepler, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:50:11 compute-0 podman[85678]: 2026-01-23 09:50:11.794293585 +0000 UTC m=+0.130405483 container start 01c70c5b5a930b5967977349c53e87ef9bb0ab05295906029ab389e2f61c2f2a (image=quay.io/ceph/ceph:v19, name=blissful_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:11 compute-0 podman[85678]: 2026-01-23 09:50:11.799371378 +0000 UTC m=+0.135483286 container attach 01c70c5b5a930b5967977349c53e87ef9bb0ab05295906029ab389e2f61c2f2a (image=quay.io/ceph/ceph:v19, name=blissful_kepler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 09:50:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 23 09:50:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Jan 23 09:50:12 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Jan 23 09:50:12 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev d44198b6-5f00-4583-a9d8-cc20c30efecd (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 23 09:50:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Jan 23 09:50:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 24 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=24 pruub=14.760506630s) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active pruub 44.894447327s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 24 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=24 pruub=15.756515503s) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active pruub 45.890460968s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 24 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=24 pruub=15.756515503s) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown pruub 45.890460968s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 24 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=24 pruub=14.760506630s) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown pruub 44.894447327s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:12 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1803776421' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 23 09:50:12 compute-0 ceph-mon[74335]: osdmap e23: 2 total, 2 up, 2 in
Jan 23 09:50:12 compute-0 ceph-mon[74335]: mgrmap e9: compute-0.nbdygh(active, since 2m)
Jan 23 09:50:12 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:12 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:12 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:12 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:12 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:50:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:50:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 23 09:50:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2193766018' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 23 09:50:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 23 09:50:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2193766018' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 23 09:50:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Jan 23 09:50:13 compute-0 blissful_kepler[85693]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 23 09:50:13 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Jan 23 09:50:13 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 929d04b2-e4c6-4fcc-b01a-f75ff29cfa40 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 23 09:50:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 23 09:50:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1f( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.18( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.19( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.17( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1e( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.10( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.16( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.15( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.11( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.12( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.13( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.14( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.12( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.14( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.13( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.15( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.11( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.16( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.10( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.17( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.8( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.f( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.e( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.9( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.d( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.a( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.c( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.b( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.b( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.c( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.a( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.d( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.7( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.7( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.6( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.2( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.5( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.6( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.2( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.5( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.4( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.3( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.4( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.3( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.8( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.f( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.9( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.e( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1d( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1a( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1c( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1b( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1b( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1c( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1a( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1d( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.19( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1f( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1e( empty local-lis/les=14/15 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.18( empty local-lis/les=15/16 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:13 compute-0 ceph-mon[74335]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:13 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:13 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:13 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:13 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:13 compute-0 ceph-mon[74335]: osdmap e24: 2 total, 2 up, 2 in
Jan 23 09:50:13 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:13 compute-0 ceph-mon[74335]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:50:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2193766018' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 23 09:50:13 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2193766018' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 23 09:50:13 compute-0 ceph-mon[74335]: osdmap e25: 2 total, 2 up, 2 in
Jan 23 09:50:13 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1f( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.19( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.18( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1e( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.10( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.17( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.16( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.15( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.11( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.12( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.13( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.14( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.12( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.11( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.13( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.14( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.15( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.16( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.10( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.17( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.f( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.8( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.e( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.9( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.a( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.c( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.b( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.d( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.b( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.0( empty local-lis/les=24/25 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.c( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.a( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.7( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.d( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.0( empty local-lis/les=24/25 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.7( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.5( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.6( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.2( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.5( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.2( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.6( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.4( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.3( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.4( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.f( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.9( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.3( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.e( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1d( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.8( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1a( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1b( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1c( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1c( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1b( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.1a( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1d( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.19( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1e( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[3.1f( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=14/14 les/c/f=15/15/0 sis=24) [1] r=0 lpr=24 pi=[14,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 25 pg[4.18( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=15/15 les/c/f=16/16/0 sis=24) [1] r=0 lpr=24 pi=[15,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:13 compute-0 systemd[1]: libpod-01c70c5b5a930b5967977349c53e87ef9bb0ab05295906029ab389e2f61c2f2a.scope: Deactivated successfully.
Jan 23 09:50:13 compute-0 podman[85718]: 2026-01-23 09:50:13.087108576 +0000 UTC m=+0.027136167 container died 01c70c5b5a930b5967977349c53e87ef9bb0ab05295906029ab389e2f61c2f2a (image=quay.io/ceph/ceph:v19, name=blissful_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 09:50:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d9b55fa64d21e5ce947f4dea1c3396145b1bd31b8f62b39a90632dbb435bbaa-merged.mount: Deactivated successfully.
Jan 23 09:50:13 compute-0 podman[85718]: 2026-01-23 09:50:13.128017889 +0000 UTC m=+0.068045450 container remove 01c70c5b5a930b5967977349c53e87ef9bb0ab05295906029ab389e2f61c2f2a (image=quay.io/ceph/ceph:v19, name=blissful_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:13 compute-0 systemd[1]: libpod-conmon-01c70c5b5a930b5967977349c53e87ef9bb0ab05295906029ab389e2f61c2f2a.scope: Deactivated successfully.
Jan 23 09:50:13 compute-0 sudo[85675]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:13 compute-0 sudo[85755]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqlmckgfablbnbidflwwilyqhmtwbfcy ; /usr/bin/python3'
Jan 23 09:50:13 compute-0 sudo[85755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:13 compute-0 python3[85757]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:13 compute-0 podman[85758]: 2026-01-23 09:50:13.504647207 +0000 UTC m=+0.052436262 container create 25cff9c7a71795a9e0eaa69885c143c6b2e3bcf0282c480f06996eaa1dabc2a4 (image=quay.io/ceph/ceph:v19, name=youthful_morse, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 09:50:13 compute-0 systemd[1]: Started libpod-conmon-25cff9c7a71795a9e0eaa69885c143c6b2e3bcf0282c480f06996eaa1dabc2a4.scope.
Jan 23 09:50:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34bd29c38735513eeed4c2d0439660eee94e31d9a01704f84b03858441a4e16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34bd29c38735513eeed4c2d0439660eee94e31d9a01704f84b03858441a4e16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:13 compute-0 podman[85758]: 2026-01-23 09:50:13.47792881 +0000 UTC m=+0.025717885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:13 compute-0 podman[85758]: 2026-01-23 09:50:13.585955562 +0000 UTC m=+0.133744667 container init 25cff9c7a71795a9e0eaa69885c143c6b2e3bcf0282c480f06996eaa1dabc2a4 (image=quay.io/ceph/ceph:v19, name=youthful_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 09:50:13 compute-0 podman[85758]: 2026-01-23 09:50:13.593740806 +0000 UTC m=+0.141529871 container start 25cff9c7a71795a9e0eaa69885c143c6b2e3bcf0282c480f06996eaa1dabc2a4 (image=quay.io/ceph/ceph:v19, name=youthful_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:50:13 compute-0 podman[85758]: 2026-01-23 09:50:13.603432792 +0000 UTC m=+0.151221837 container attach 25cff9c7a71795a9e0eaa69885c143c6b2e3bcf0282c480f06996eaa1dabc2a4 (image=quay.io/ceph/ceph:v19, name=youthful_morse, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:50:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v80: 100 pgs: 2 peering, 93 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 09:50:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 09:50:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:13 compute-0 ceph-mgr[74633]: [progress WARNING root] Starting Global Recovery Event,95 pgs not in active + clean state
Jan 23 09:50:13 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 23 09:50:13 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 23 09:50:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 23 09:50:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2528169956' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 23 09:50:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 23 09:50:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2528169956' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 23 09:50:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Jan 23 09:50:14 compute-0 youthful_morse[85773]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 23 09:50:14 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev fa94f636-9fad-4a89-ba43-14434df26eeb (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev ed090298-d94c-48c4-aa33-bd40d612b85c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event ed090298-d94c-48c4-aa33-bd40d612b85c (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 898cdd0b-0ae5-44d8-8340-453a3e1878c4 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 898cdd0b-0ae5-44d8-8340-453a3e1878c4 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 8b423136-8b0d-4684-802d-a17d11d2e7d9 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 8b423136-8b0d-4684-802d-a17d11d2e7d9 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev d44198b6-5f00-4583-a9d8-cc20c30efecd (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event d44198b6-5f00-4583-a9d8-cc20c30efecd (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 929d04b2-e4c6-4fcc-b01a-f75ff29cfa40 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 929d04b2-e4c6-4fcc-b01a-f75ff29cfa40 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev fa94f636-9fad-4a89-ba43-14434df26eeb (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 23 09:50:14 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event fa94f636-9fad-4a89-ba43-14434df26eeb (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Jan 23 09:50:14 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:14 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:14 compute-0 ceph-mon[74335]: 4.1f scrub starts
Jan 23 09:50:14 compute-0 ceph-mon[74335]: 4.1f scrub ok
Jan 23 09:50:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2528169956' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 23 09:50:14 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:50:14 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:14 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2528169956' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 23 09:50:14 compute-0 ceph-mon[74335]: osdmap e26: 2 total, 2 up, 2 in
Jan 23 09:50:14 compute-0 systemd[1]: libpod-25cff9c7a71795a9e0eaa69885c143c6b2e3bcf0282c480f06996eaa1dabc2a4.scope: Deactivated successfully.
Jan 23 09:50:14 compute-0 podman[85758]: 2026-01-23 09:50:14.047012974 +0000 UTC m=+0.594802049 container died 25cff9c7a71795a9e0eaa69885c143c6b2e3bcf0282c480f06996eaa1dabc2a4 (image=quay.io/ceph/ceph:v19, name=youthful_morse, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 09:50:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a34bd29c38735513eeed4c2d0439660eee94e31d9a01704f84b03858441a4e16-merged.mount: Deactivated successfully.
Jan 23 09:50:14 compute-0 podman[85758]: 2026-01-23 09:50:14.1109075 +0000 UTC m=+0.658696555 container remove 25cff9c7a71795a9e0eaa69885c143c6b2e3bcf0282c480f06996eaa1dabc2a4 (image=quay.io/ceph/ceph:v19, name=youthful_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 23 09:50:14 compute-0 systemd[1]: libpod-conmon-25cff9c7a71795a9e0eaa69885c143c6b2e3bcf0282c480f06996eaa1dabc2a4.scope: Deactivated successfully.
Jan 23 09:50:14 compute-0 sudo[85755]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:14 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 23 09:50:14 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 23 09:50:15 compute-0 ceph-mon[74335]: pgmap v80: 100 pgs: 2 peering, 93 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:15 compute-0 ceph-mon[74335]: 2.1e scrub starts
Jan 23 09:50:15 compute-0 ceph-mon[74335]: 2.1e scrub ok
Jan 23 09:50:15 compute-0 ceph-mon[74335]: 3.19 scrub starts
Jan 23 09:50:15 compute-0 ceph-mon[74335]: 3.19 scrub ok
Jan 23 09:50:15 compute-0 python3[85885]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:50:15 compute-0 python3[85956]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769161814.9073184-37379-138006514138450/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:50:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v82: 162 pgs: 2 peering, 124 unknown, 36 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 09:50:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:15 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 23 09:50:15 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 23 09:50:16 compute-0 sudo[86056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdaqaovnzyzvxsvxlufurdnhkqcywsvl ; /usr/bin/python3'
Jan 23 09:50:16 compute-0 sudo[86056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:16 compute-0 python3[86058]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:50:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 23 09:50:16 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 09:50:16 compute-0 ceph-mon[74335]: 2.1d scrub starts
Jan 23 09:50:16 compute-0 ceph-mon[74335]: 2.1d scrub ok
Jan 23 09:50:16 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:16 compute-0 ceph-mon[74335]: 4.10 scrub starts
Jan 23 09:50:16 compute-0 ceph-mon[74335]: 4.10 scrub ok
Jan 23 09:50:16 compute-0 sudo[86056]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:16 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Jan 23 09:50:16 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Jan 23 09:50:16 compute-0 sudo[86131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihugtdhowjxkikntrvtwunmiaiyqihsd ; /usr/bin/python3'
Jan 23 09:50:16 compute-0 sudo[86131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:16 compute-0 python3[86133]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769161815.940012-37393-221425687879473/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=154e70425f75c76ca662c815b6e2ea581e0c242f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:50:16 compute-0 sudo[86131]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:16 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.1e deep-scrub starts
Jan 23 09:50:16 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.1e deep-scrub ok
Jan 23 09:50:16 compute-0 sudo[86181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksbdortgvcioffqjhurnmvopcobsksqs ; /usr/bin/python3'
Jan 23 09:50:16 compute-0 sudo[86181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:16 compute-0 python3[86183]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:17 compute-0 podman[86184]: 2026-01-23 09:50:17.036619027 +0000 UTC m=+0.044244539 container create c22d2e937ed10ac09c8c6b8f94aaad0cb440f5f7b07b840694c8a812dce6b985 (image=quay.io/ceph/ceph:v19, name=blissful_chatterjee, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:50:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:50:17 compute-0 systemd[1]: Started libpod-conmon-c22d2e937ed10ac09c8c6b8f94aaad0cb440f5f7b07b840694c8a812dce6b985.scope.
Jan 23 09:50:17 compute-0 podman[86184]: 2026-01-23 09:50:17.016965908 +0000 UTC m=+0.024591440 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c10b05eea87f18584c70e8e5cd6c52bad31237729a99a349bc9c2ece1ca3f65/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c10b05eea87f18584c70e8e5cd6c52bad31237729a99a349bc9c2ece1ca3f65/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c10b05eea87f18584c70e8e5cd6c52bad31237729a99a349bc9c2ece1ca3f65/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:17 compute-0 podman[86184]: 2026-01-23 09:50:17.146890088 +0000 UTC m=+0.154515620 container init c22d2e937ed10ac09c8c6b8f94aaad0cb440f5f7b07b840694c8a812dce6b985 (image=quay.io/ceph/ceph:v19, name=blissful_chatterjee, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 09:50:17 compute-0 podman[86184]: 2026-01-23 09:50:17.152263848 +0000 UTC m=+0.159889360 container start c22d2e937ed10ac09c8c6b8f94aaad0cb440f5f7b07b840694c8a812dce6b985 (image=quay.io/ceph/ceph:v19, name=blissful_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:50:17 compute-0 podman[86184]: 2026-01-23 09:50:17.171214201 +0000 UTC m=+0.178839713 container attach c22d2e937ed10ac09c8c6b8f94aaad0cb440f5f7b07b840694c8a812dce6b985 (image=quay.io/ceph/ceph:v19, name=blissful_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 23 09:50:17 compute-0 ceph-mon[74335]: pgmap v82: 162 pgs: 2 peering, 124 unknown, 36 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:17 compute-0 ceph-mon[74335]: 2.1f scrub starts
Jan 23 09:50:17 compute-0 ceph-mon[74335]: 2.1f scrub ok
Jan 23 09:50:17 compute-0 ceph-mon[74335]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 09:50:17 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:50:17 compute-0 ceph-mon[74335]: osdmap e27: 2 total, 2 up, 2 in
Jan 23 09:50:17 compute-0 ceph-mon[74335]: 4.1e deep-scrub starts
Jan 23 09:50:17 compute-0 ceph-mon[74335]: 4.1e deep-scrub ok
Jan 23 09:50:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 23 09:50:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Jan 23 09:50:17 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 26 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=26 pruub=12.276928902s) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active pruub 47.901512146s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 26 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=26 pruub=11.277157784s) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active pruub 46.901824951s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=26 pruub=11.277157784s) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown pruub 46.901824951s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=26 pruub=12.276928902s) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown pruub 47.901512146s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.b( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.c( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.d( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.e( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.1b( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.1c( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.1d( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.1e( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.f( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.10( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.11( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.12( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.13( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.14( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.1( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.2( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.7( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.8( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.9( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.a( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.15( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.16( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.17( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.18( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.1f( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.19( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.1a( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.3( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.4( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.5( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[5.6( empty local-lis/les=16/17 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.7( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.8( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.9( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.a( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.b( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.c( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.15( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.16( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.d( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.e( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.f( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.10( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.2( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.1( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.11( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.12( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.13( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.14( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.17( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.18( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.4( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.3( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.5( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.6( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.19( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.1a( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.1b( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.1c( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.1d( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.1e( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 28 pg[6.1f( empty local-lis/les=17/18 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 23 09:50:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/29302298' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 09:50:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/29302298' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 09:50:17 compute-0 blissful_chatterjee[86199]: 
Jan 23 09:50:17 compute-0 blissful_chatterjee[86199]: [global]
Jan 23 09:50:17 compute-0 blissful_chatterjee[86199]:         fsid = f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:50:17 compute-0 blissful_chatterjee[86199]:         mon_host = 192.168.122.100
Jan 23 09:50:17 compute-0 systemd[1]: libpod-c22d2e937ed10ac09c8c6b8f94aaad0cb440f5f7b07b840694c8a812dce6b985.scope: Deactivated successfully.
Jan 23 09:50:17 compute-0 podman[86184]: 2026-01-23 09:50:17.563012407 +0000 UTC m=+0.570637929 container died c22d2e937ed10ac09c8c6b8f94aaad0cb440f5f7b07b840694c8a812dce6b985 (image=quay.io/ceph/ceph:v19, name=blissful_chatterjee, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 09:50:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c10b05eea87f18584c70e8e5cd6c52bad31237729a99a349bc9c2ece1ca3f65-merged.mount: Deactivated successfully.
Jan 23 09:50:17 compute-0 podman[86184]: 2026-01-23 09:50:17.603999672 +0000 UTC m=+0.611625184 container remove c22d2e937ed10ac09c8c6b8f94aaad0cb440f5f7b07b840694c8a812dce6b985 (image=quay.io/ceph/ceph:v19, name=blissful_chatterjee, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 23 09:50:17 compute-0 systemd[1]: libpod-conmon-c22d2e937ed10ac09c8c6b8f94aaad0cb440f5f7b07b840694c8a812dce6b985.scope: Deactivated successfully.
Jan 23 09:50:17 compute-0 sudo[86181]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v85: 193 pgs: 93 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:17 compute-0 sudo[86258]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqgtevynlxifggppnvhlwqekgoqhkmtp ; /usr/bin/python3'
Jan 23 09:50:17 compute-0 sudo[86258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:17 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 23 09:50:17 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 23 09:50:17 compute-0 python3[86260]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:18 compute-0 podman[86261]: 2026-01-23 09:50:17.967937367 +0000 UTC m=+0.025290005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:18 compute-0 podman[86261]: 2026-01-23 09:50:18.188319927 +0000 UTC m=+0.245672545 container create 9223665e23569f22655e944cb9fe81fd326592f87ff3d29f8e7c33e571c618a3 (image=quay.io/ceph/ceph:v19, name=mystifying_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 09:50:18 compute-0 systemd[1]: Started libpod-conmon-9223665e23569f22655e944cb9fe81fd326592f87ff3d29f8e7c33e571c618a3.scope.
Jan 23 09:50:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b62412e550075c13b7d0c7a618b0a36c020844fc8aa299e8307e5e6395620f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b62412e550075c13b7d0c7a618b0a36c020844fc8aa299e8307e5e6395620f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b62412e550075c13b7d0c7a618b0a36c020844fc8aa299e8307e5e6395620f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 23 09:50:18 compute-0 ceph-mon[74335]: 2.1c scrub starts
Jan 23 09:50:18 compute-0 ceph-mon[74335]: 2.1c scrub ok
Jan 23 09:50:18 compute-0 ceph-mon[74335]: osdmap e28: 2 total, 2 up, 2 in
Jan 23 09:50:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/29302298' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 09:50:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/29302298' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 09:50:18 compute-0 ceph-mon[74335]: 3.18 scrub starts
Jan 23 09:50:18 compute-0 ceph-mon[74335]: 3.18 scrub ok
Jan 23 09:50:18 compute-0 podman[86261]: 2026-01-23 09:50:18.744624826 +0000 UTC m=+0.801977494 container init 9223665e23569f22655e944cb9fe81fd326592f87ff3d29f8e7c33e571c618a3 (image=quay.io/ceph/ceph:v19, name=mystifying_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 09:50:18 compute-0 podman[86261]: 2026-01-23 09:50:18.750165909 +0000 UTC m=+0.807518527 container start 9223665e23569f22655e944cb9fe81fd326592f87ff3d29f8e7c33e571c618a3 (image=quay.io/ceph/ceph:v19, name=mystifying_mayer, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 23 09:50:18 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 23 09:50:18 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 23 09:50:18 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 8 completed events
Jan 23 09:50:19 compute-0 podman[86261]: 2026-01-23 09:50:19.026882897 +0000 UTC m=+1.084235535 container attach 9223665e23569f22655e944cb9fe81fd326592f87ff3d29f8e7c33e571c618a3 (image=quay.io/ceph/ceph:v19, name=mystifying_mayer, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 09:50:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:50:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Jan 23 09:50:19 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Jan 23 09:50:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v87: 193 pgs: 93 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:19 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 23 09:50:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 23 09:50:20 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:21 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 23 09:50:21 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Jan 23 09:50:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v88: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:22 compute-0 ceph-mon[74335]: pgmap v85: 193 pgs: 93 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:22 compute-0 ceph-mon[74335]: 2.7 deep-scrub starts
Jan 23 09:50:22 compute-0 ceph-mon[74335]: 2.7 deep-scrub ok
Jan 23 09:50:22 compute-0 ceph-mon[74335]: 3.17 scrub starts
Jan 23 09:50:22 compute-0 ceph-mon[74335]: 3.17 scrub ok
Jan 23 09:50:22 compute-0 ceph-mon[74335]: 2.9 scrub starts
Jan 23 09:50:22 compute-0 ceph-mon[74335]: 2.9 scrub ok
Jan 23 09:50:22 compute-0 ceph-mon[74335]: osdmap e29: 2 total, 2 up, 2 in
Jan 23 09:50:22 compute-0 ceph-mon[74335]: 3.16 scrub starts
Jan 23 09:50:22 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Jan 23 09:50:22 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 23 09:50:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2695482257' entity='client.admin' 
Jan 23 09:50:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:50:22 compute-0 mystifying_mayer[86276]: set ssl_option
Jan 23 09:50:22 compute-0 systemd[1]: libpod-9223665e23569f22655e944cb9fe81fd326592f87ff3d29f8e7c33e571c618a3.scope: Deactivated successfully.
Jan 23 09:50:22 compute-0 podman[86261]: 2026-01-23 09:50:22.782531678 +0000 UTC m=+4.839884336 container died 9223665e23569f22655e944cb9fe81fd326592f87ff3d29f8e7c33e571c618a3 (image=quay.io/ceph/ceph:v19, name=mystifying_mayer, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.19( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.1a( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.18( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.1b( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.18( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.1a( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.19( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.1b( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.1e( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.1d( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.f( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.1f( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.c( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.d( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.1c( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.2( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.1( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.5( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.4( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.7( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.7( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.4( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.0( empty local-lis/les=26/29 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.3( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.6( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.0( empty local-lis/les=26/29 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.3( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.1( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.6( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.5( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.c( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.d( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.f( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.e( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.2( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.a( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.b( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.8( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.b( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.8( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.9( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.9( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.a( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.15( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.16( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.17( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.14( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.14( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.16( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.15( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.12( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.11( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.13( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.10( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.13( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.12( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.10( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.11( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.1d( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.1f( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.1e( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.1c( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[6.17( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=17/17 les/c/f=18/18/0 sis=26) [1] r=0 lpr=26 pi=[17,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 29 pg[5.e( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=16/16 les/c/f=17/17/0 sis=26) [1] r=0 lpr=26 pi=[16,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:23 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 23 09:50:23 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 23 09:50:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v89: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:23 compute-0 ceph-mon[74335]: pgmap v87: 193 pgs: 93 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:23 compute-0 ceph-mon[74335]: 2.8 scrub starts
Jan 23 09:50:23 compute-0 ceph-mon[74335]: 2.8 scrub ok
Jan 23 09:50:23 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:23 compute-0 ceph-mon[74335]: 2.6 scrub starts
Jan 23 09:50:23 compute-0 ceph-mon[74335]: 2.6 scrub ok
Jan 23 09:50:23 compute-0 ceph-mon[74335]: 3.16 scrub ok
Jan 23 09:50:23 compute-0 ceph-mon[74335]: 4.11 deep-scrub starts
Jan 23 09:50:23 compute-0 ceph-mon[74335]: pgmap v88: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:23 compute-0 ceph-mon[74335]: 2.a scrub starts
Jan 23 09:50:23 compute-0 ceph-mon[74335]: 2.a scrub ok
Jan 23 09:50:23 compute-0 ceph-mon[74335]: 4.11 deep-scrub ok
Jan 23 09:50:23 compute-0 ceph-mon[74335]: 3.15 scrub starts
Jan 23 09:50:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2695482257' entity='client.admin' 
Jan 23 09:50:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6b62412e550075c13b7d0c7a618b0a36c020844fc8aa299e8307e5e6395620f-merged.mount: Deactivated successfully.
Jan 23 09:50:23 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 23 09:50:23 compute-0 systemd[75668]: Starting Mark boot as successful...
Jan 23 09:50:23 compute-0 podman[86261]: 2026-01-23 09:50:23.87648158 +0000 UTC m=+5.933834198 container remove 9223665e23569f22655e944cb9fe81fd326592f87ff3d29f8e7c33e571c618a3 (image=quay.io/ceph/ceph:v19, name=mystifying_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 09:50:23 compute-0 systemd[75668]: Finished Mark boot as successful.
Jan 23 09:50:23 compute-0 sudo[86258]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:23 compute-0 systemd[1]: libpod-conmon-9223665e23569f22655e944cb9fe81fd326592f87ff3d29f8e7c33e571c618a3.scope: Deactivated successfully.
Jan 23 09:50:24 compute-0 sudo[86336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwdrngsswcxwtvqrufgnxbrmqiufjebm ; /usr/bin/python3'
Jan 23 09:50:24 compute-0 sudo[86336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:24 compute-0 python3[86338]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:24 compute-0 podman[86339]: 2026-01-23 09:50:24.272711456 +0000 UTC m=+0.046766565 container create 6b0cce378ff208dd8c0c5920e6615a04189d05585b40d744c531ea95730c77aa (image=quay.io/ceph/ceph:v19, name=affectionate_ishizaka, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Jan 23 09:50:24 compute-0 systemd[1]: Started libpod-conmon-6b0cce378ff208dd8c0c5920e6615a04189d05585b40d744c531ea95730c77aa.scope.
Jan 23 09:50:24 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109165d61b22989230830526ee94e96d497e542d84dc85b8967fd6630677ab15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109165d61b22989230830526ee94e96d497e542d84dc85b8967fd6630677ab15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109165d61b22989230830526ee94e96d497e542d84dc85b8967fd6630677ab15/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:24 compute-0 podman[86339]: 2026-01-23 09:50:24.255131634 +0000 UTC m=+0.029186753 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:24 compute-0 podman[86339]: 2026-01-23 09:50:24.355483744 +0000 UTC m=+0.129538833 container init 6b0cce378ff208dd8c0c5920e6615a04189d05585b40d744c531ea95730c77aa (image=quay.io/ceph/ceph:v19, name=affectionate_ishizaka, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:50:24 compute-0 podman[86339]: 2026-01-23 09:50:24.362677535 +0000 UTC m=+0.136732634 container start 6b0cce378ff208dd8c0c5920e6615a04189d05585b40d744c531ea95730c77aa (image=quay.io/ceph/ceph:v19, name=affectionate_ishizaka, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 23 09:50:24 compute-0 podman[86339]: 2026-01-23 09:50:24.367265317 +0000 UTC m=+0.141320486 container attach 6b0cce378ff208dd8c0c5920e6615a04189d05585b40d744c531ea95730c77aa (image=quay.io/ceph/ceph:v19, name=affectionate_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:50:24 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14225 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:50:24 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 09:50:24 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 09:50:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 23 09:50:24 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:24 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 23 09:50:24 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 23 09:50:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 23 09:50:24 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 23 09:50:24 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 23 09:50:25 compute-0 ceph-mon[74335]: 2.2 scrub starts
Jan 23 09:50:25 compute-0 ceph-mon[74335]: 2.2 scrub ok
Jan 23 09:50:25 compute-0 ceph-mon[74335]: 3.15 scrub ok
Jan 23 09:50:25 compute-0 ceph-mon[74335]: 4.12 scrub starts
Jan 23 09:50:25 compute-0 ceph-mon[74335]: pgmap v89: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:25 compute-0 ceph-mon[74335]: 4.12 scrub ok
Jan 23 09:50:25 compute-0 ceph-mon[74335]: 2.0 deep-scrub starts
Jan 23 09:50:25 compute-0 ceph-mon[74335]: 2.0 deep-scrub ok
Jan 23 09:50:25 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v90: 193 pgs: 1 active, 1 active+clean+scrubbing, 191 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 09:50:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 09:50:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 09:50:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 09:50:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 09:50:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 09:50:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:25 compute-0 affectionate_ishizaka[86355]: Scheduled rgw.rgw update...
Jan 23 09:50:25 compute-0 affectionate_ishizaka[86355]: Scheduled ingress.rgw.default update...
Jan 23 09:50:25 compute-0 systemd[1]: libpod-6b0cce378ff208dd8c0c5920e6615a04189d05585b40d744c531ea95730c77aa.scope: Deactivated successfully.
Jan 23 09:50:25 compute-0 podman[86339]: 2026-01-23 09:50:25.779450693 +0000 UTC m=+1.553505792 container died 6b0cce378ff208dd8c0c5920e6615a04189d05585b40d744c531ea95730c77aa (image=quay.io/ceph/ceph:v19, name=affectionate_ishizaka, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:25 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 23 09:50:25 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 23 09:50:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-109165d61b22989230830526ee94e96d497e542d84dc85b8967fd6630677ab15-merged.mount: Deactivated successfully.
Jan 23 09:50:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 23 09:50:26 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 23 09:50:26 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 23 09:50:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Jan 23 09:50:27 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.1a( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.454274178s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.019165039s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.18( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.592358589s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.157268524s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.1a( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.454221725s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.019165039s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.18( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.592308998s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.157268524s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.18( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453958511s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.019275665s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.18( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453931808s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.019275665s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.1b( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.454290390s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.019950867s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.1b( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.454267502s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.019950867s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.1d( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.591413498s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.157253265s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.1a( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.591321945s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.157203674s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.1d( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.591350555s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.157253265s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.1a( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.591291428s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.157203674s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.19( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453896523s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020011902s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.19( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453872681s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020011902s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.1e( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453877449s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020072937s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.1e( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453865051s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020072937s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.1c( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453940392s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020286560s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.1c( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453929901s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020286560s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.1a( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453654289s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020000458s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.1a( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.590769768s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.157142639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.1a( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.590746880s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.157142639s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.e( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.590608597s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.157077789s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.e( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.590596199s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.157077789s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.1a( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453556061s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020000458s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.9( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.590515137s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.157058716s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.9( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.590497971s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.157058716s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.f( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453613281s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020195007s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.f( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453566551s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020195007s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.1c( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.590517998s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.157188416s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.1c( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.590502739s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.157188416s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.e( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.454139709s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020896912s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.e( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.454128265s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020896912s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.d( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453424454s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020267487s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.2( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453327179s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020301819s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.3( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.589871407s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156875610s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.2( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453310966s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020301819s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.3( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.589859962s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156875610s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.d( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453212738s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020267487s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.589592934s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156799316s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.4( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453114510s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020328522s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.4( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.453101158s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020328522s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.589577675s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156799316s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.7( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452905655s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020332336s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.7( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452875137s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020332336s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.5( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.589282036s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156768799s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.5( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.589269638s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156768799s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.7( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452811241s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020339966s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.7( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452759743s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020339966s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.1b( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.589753151s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.157165527s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.1( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452546120s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020416260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.1( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452524185s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020416260s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.1b( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.589377403s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.157165527s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.1( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.588697433s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156745911s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.1( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.588674545s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156745911s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.3( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452326775s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020408630s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.3( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452309608s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020408630s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.5( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452239990s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020450592s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.5( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452219009s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020450592s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.a( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.588390350s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156658173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.588432312s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156723022s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.588419914s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156723022s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.c( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.588270187s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156677246s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.a( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.588374138s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156658173s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.c( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.588256836s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156677246s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.e( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452066422s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020519257s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.e( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452056885s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020519257s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.2( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.452523232s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020519257s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.2( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.451980591s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020519257s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.c( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.588109016s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156661987s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.c( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.588057518s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156661987s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.a( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.587789536s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156585693s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.a( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.587772369s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156585693s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.8( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.451690674s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020549774s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.8( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.451676369s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020549774s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.e( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.587535858s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156543732s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.e( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.587491989s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156543732s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.587456703s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156566620s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.587442398s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156566620s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.587300301s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156532288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.9( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.451311111s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020584106s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.9( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.451300621s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020584106s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.587244034s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156532288s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.a( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.451098442s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020595551s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.a( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.451059341s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020595551s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.16( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.451052666s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020629883s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 podman[86339]: 2026-01-23 09:50:27.454689062 +0000 UTC m=+3.228744161 container remove 6b0cce378ff208dd8c0c5920e6615a04189d05585b40d744c531ea95730c77aa (image=quay.io/ceph/ceph:v19, name=affectionate_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.16( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.450978279s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020629883s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.10( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.586785316s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156467438s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.15( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.450799942s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020599365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.10( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.586689949s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156467438s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.15( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.450776100s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020599365s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.d( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.586538315s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156600952s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.11( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.586268425s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156356812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.d( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.586500168s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156600952s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.f( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.586302757s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156494141s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.11( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.586245537s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156356812s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.f( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.586275101s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156494141s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.15( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.586130142s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156391144s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.15( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.586117744s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156391144s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.17( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.450509071s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020885468s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.13( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.585978508s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156364441s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.17( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.450478554s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020885468s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.13( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.585885048s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156314850s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.13( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.585944176s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156364441s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.15( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.450213432s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020687103s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.13( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.585867882s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156314850s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.15( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.450176239s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020687103s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.14( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.585716248s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156330109s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.15( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.585607529s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.156250000s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.14( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.585701942s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156330109s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.15( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.585587502s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.156250000s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.16( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.887242317s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.457954407s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[3.16( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.887228012s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.457954407s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.11( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.450025558s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020816803s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.11( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.450011253s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020816803s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.12( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.449935913s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020751953s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.12( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.449899673s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020751953s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.1f( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.579477310s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active pruub 55.150356293s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[4.1f( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=30 pruub=9.579464912s) [0] r=-1 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 55.150356293s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.1c( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.449832916s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020877838s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.1f( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.449786186s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020847321s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.10( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.449657440s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 active pruub 57.020759583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.1f( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.449736595s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020847321s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[5.10( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.449639320s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020759583s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[6.1c( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=30 pruub=11.449788094s) [0] r=-1 lpr=30 pi=[26,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.020877838s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:50:27 compute-0 ceph-mon[74335]: from='client.14225 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:50:27 compute-0 ceph-mon[74335]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 09:50:27 compute-0 ceph-mon[74335]: Saving service ingress.rgw.default spec with placement count:2
Jan 23 09:50:27 compute-0 ceph-mon[74335]: 3.11 scrub starts
Jan 23 09:50:27 compute-0 ceph-mon[74335]: 3.11 scrub ok
Jan 23 09:50:27 compute-0 ceph-mon[74335]: 2.4 scrub starts
Jan 23 09:50:27 compute-0 ceph-mon[74335]: 2.4 scrub ok
Jan 23 09:50:27 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:27 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:27 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:27 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:27 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:27 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:27 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:50:27 compute-0 ceph-mon[74335]: 4.14 scrub starts
Jan 23 09:50:27 compute-0 ceph-mon[74335]: 4.14 scrub ok
Jan 23 09:50:27 compute-0 sudo[86336]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.19( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.1d( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.10( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.13( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.13( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.14( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.a( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.e( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.10( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.b( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.c( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.8( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.9( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.e( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.6( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.1( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.3( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.1e( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.2( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.f( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.1b( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.1e( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.4( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[2.1f( empty local-lis/les=0/0 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 30 pg[7.18( empty local-lis/les=0/0 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:50:27 compute-0 systemd[1]: libpod-conmon-6b0cce378ff208dd8c0c5920e6615a04189d05585b40d744c531ea95730c77aa.scope: Deactivated successfully.
Jan 23 09:50:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v92: 193 pgs: 1 active, 1 active+clean+scrubbing, 191 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:50:27 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 23 09:50:27 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 23 09:50:27 compute-0 python3[86467]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:50:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 23 09:50:28 compute-0 python3[86538]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769161827.6214647-37444-77183829441616/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:50:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Jan 23 09:50:28 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.1e( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.1b( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.18( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.1e( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.1f( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.1b( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.9( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.6( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.6( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.4( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.2( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.1( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.3( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.4( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.e( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.f( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.a( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.9( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.c( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.d( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.8( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.a( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.b( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.e( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.14( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.10( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.13( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.10( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.15( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.13( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[7.1d( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=27/27 les/c/f=28/28/0 sis=30) [1] r=0 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 31 pg[2.19( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=24/24 les/c/f=25/25/0 sis=30) [1] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:50:28 compute-0 ceph-mon[74335]: pgmap v90: 193 pgs: 1 active, 1 active+clean+scrubbing, 191 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:28 compute-0 ceph-mon[74335]: 2.1 scrub starts
Jan 23 09:50:28 compute-0 ceph-mon[74335]: 2.1 scrub ok
Jan 23 09:50:28 compute-0 ceph-mon[74335]: 3.12 scrub starts
Jan 23 09:50:28 compute-0 ceph-mon[74335]: 3.12 scrub ok
Jan 23 09:50:28 compute-0 ceph-mon[74335]: 2.3 scrub starts
Jan 23 09:50:28 compute-0 ceph-mon[74335]: 2.3 scrub ok
Jan 23 09:50:28 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:28 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:28 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:28 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:28 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:28 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:50:28 compute-0 ceph-mon[74335]: osdmap e30: 2 total, 2 up, 2 in
Jan 23 09:50:28 compute-0 ceph-mon[74335]: 3.1f scrub starts
Jan 23 09:50:28 compute-0 ceph-mon[74335]: 3.1f scrub ok
Jan 23 09:50:28 compute-0 ceph-mon[74335]: osdmap e31: 2 total, 2 up, 2 in
Jan 23 09:50:28 compute-0 sudo[86586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eumcmpffwlusoopakggzborsqrqckuyl ; /usr/bin/python3'
Jan 23 09:50:28 compute-0 sudo[86586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:28 compute-0 python3[86588]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:28 compute-0 podman[86589]: 2026-01-23 09:50:28.99708361 +0000 UTC m=+0.065408876 container create 377cef915dcddd11e3ff1e39bcfdfde1e1bcbb63fabfb50d6847d2252ae84faa (image=quay.io/ceph/ceph:v19, name=nervous_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:50:29 compute-0 podman[86589]: 2026-01-23 09:50:28.95673478 +0000 UTC m=+0.025060076 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:29 compute-0 systemd[1]: Started libpod-conmon-377cef915dcddd11e3ff1e39bcfdfde1e1bcbb63fabfb50d6847d2252ae84faa.scope.
Jan 23 09:50:29 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 23 09:50:29 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a3511d67f70a2c551cec1ec4aa9f066622e04c2d69adefa346515d1d8d8130/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a3511d67f70a2c551cec1ec4aa9f066622e04c2d69adefa346515d1d8d8130/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a3511d67f70a2c551cec1ec4aa9f066622e04c2d69adefa346515d1d8d8130/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:29 compute-0 podman[86589]: 2026-01-23 09:50:29.121122497 +0000 UTC m=+0.189447783 container init 377cef915dcddd11e3ff1e39bcfdfde1e1bcbb63fabfb50d6847d2252ae84faa (image=quay.io/ceph/ceph:v19, name=nervous_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:50:29 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 23 09:50:29 compute-0 podman[86589]: 2026-01-23 09:50:29.127786907 +0000 UTC m=+0.196112173 container start 377cef915dcddd11e3ff1e39bcfdfde1e1bcbb63fabfb50d6847d2252ae84faa (image=quay.io/ceph/ceph:v19, name=nervous_rhodes, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 23 09:50:29 compute-0 podman[86589]: 2026-01-23 09:50:29.145838371 +0000 UTC m=+0.214163667 container attach 377cef915dcddd11e3ff1e39bcfdfde1e1bcbb63fabfb50d6847d2252ae84faa (image=quay.io/ceph/ceph:v19, name=nervous_rhodes, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:50:29 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14227 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:50:29 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service node-exporter spec with placement *
Jan 23 09:50:29 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Jan 23 09:50:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 23 09:50:29 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:29 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Jan 23 09:50:29 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Jan 23 09:50:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v94: 193 pgs: 1 active, 192 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 23 09:50:29 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 23 09:50:29 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 23 09:50:29 compute-0 ceph-mon[74335]: pgmap v92: 193 pgs: 1 active, 1 active+clean+scrubbing, 191 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:29 compute-0 ceph-mon[74335]: 7.1f scrub starts
Jan 23 09:50:29 compute-0 ceph-mon[74335]: 7.1f scrub ok
Jan 23 09:50:29 compute-0 ceph-mon[74335]: from='client.14227 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:50:29 compute-0 ceph-mon[74335]: Saving service node-exporter spec with placement *
Jan 23 09:50:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:30 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Jan 23 09:50:30 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Jan 23 09:50:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 23 09:50:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:30 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Jan 23 09:50:30 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Jan 23 09:50:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 23 09:50:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:30 compute-0 nervous_rhodes[86605]: Scheduled node-exporter update...
Jan 23 09:50:30 compute-0 nervous_rhodes[86605]: Scheduled grafana update...
Jan 23 09:50:30 compute-0 nervous_rhodes[86605]: Scheduled prometheus update...
Jan 23 09:50:30 compute-0 nervous_rhodes[86605]: Scheduled alertmanager update...
Jan 23 09:50:30 compute-0 systemd[1]: libpod-377cef915dcddd11e3ff1e39bcfdfde1e1bcbb63fabfb50d6847d2252ae84faa.scope: Deactivated successfully.
Jan 23 09:50:30 compute-0 podman[86589]: 2026-01-23 09:50:30.057693828 +0000 UTC m=+1.126019094 container died 377cef915dcddd11e3ff1e39bcfdfde1e1bcbb63fabfb50d6847d2252ae84faa (image=quay.io/ceph/ceph:v19, name=nervous_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-26a3511d67f70a2c551cec1ec4aa9f066622e04c2d69adefa346515d1d8d8130-merged.mount: Deactivated successfully.
Jan 23 09:50:30 compute-0 podman[86589]: 2026-01-23 09:50:30.401971093 +0000 UTC m=+1.470296359 container remove 377cef915dcddd11e3ff1e39bcfdfde1e1bcbb63fabfb50d6847d2252ae84faa (image=quay.io/ceph/ceph:v19, name=nervous_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:50:30 compute-0 sudo[86586]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:30 compute-0 systemd[1]: libpod-conmon-377cef915dcddd11e3ff1e39bcfdfde1e1bcbb63fabfb50d6847d2252ae84faa.scope: Deactivated successfully.
Jan 23 09:50:30 compute-0 sudo[86666]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yojbaxblsxmqjdnjkgbwbipttxxrmstg ; /usr/bin/python3'
Jan 23 09:50:30 compute-0 sudo[86666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:30 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Jan 23 09:50:30 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Jan 23 09:50:30 compute-0 python3[86668]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:31 compute-0 podman[86669]: 2026-01-23 09:50:30.977904913 +0000 UTC m=+0.026146476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:31 compute-0 podman[86669]: 2026-01-23 09:50:31.309539528 +0000 UTC m=+0.357781061 container create e0f569b191f95464eb3ff1803c7e7071695e3dd3042fbfe819c628440a466c1c (image=quay.io/ceph/ceph:v19, name=compassionate_bartik, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:50:31 compute-0 ceph-mon[74335]: 4.19 scrub starts
Jan 23 09:50:31 compute-0 ceph-mon[74335]: 7.1c scrub starts
Jan 23 09:50:31 compute-0 ceph-mon[74335]: 7.1c scrub ok
Jan 23 09:50:31 compute-0 ceph-mon[74335]: 4.19 scrub ok
Jan 23 09:50:31 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:31 compute-0 ceph-mon[74335]: Saving service grafana spec with placement compute-0;count:1
Jan 23 09:50:31 compute-0 ceph-mon[74335]: pgmap v94: 193 pgs: 1 active, 192 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:31 compute-0 ceph-mon[74335]: 3.1e scrub starts
Jan 23 09:50:31 compute-0 ceph-mon[74335]: 3.1e scrub ok
Jan 23 09:50:31 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:31 compute-0 ceph-mon[74335]: Saving service prometheus spec with placement compute-0;count:1
Jan 23 09:50:31 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:31 compute-0 ceph-mon[74335]: Saving service alertmanager spec with placement compute-0;count:1
Jan 23 09:50:31 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:31 compute-0 ceph-mon[74335]: 2.18 scrub starts
Jan 23 09:50:31 compute-0 ceph-mon[74335]: 2.18 scrub ok
Jan 23 09:50:31 compute-0 ceph-mon[74335]: 6.1b scrub starts
Jan 23 09:50:31 compute-0 ceph-mon[74335]: 6.1b scrub ok
Jan 23 09:50:31 compute-0 systemd[1]: Started libpod-conmon-e0f569b191f95464eb3ff1803c7e7071695e3dd3042fbfe819c628440a466c1c.scope.
Jan 23 09:50:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/916f299c8f90329e7efa15515e9e6f7f4a9fdaf0365f299e493f8845333a57b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/916f299c8f90329e7efa15515e9e6f7f4a9fdaf0365f299e493f8845333a57b8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/916f299c8f90329e7efa15515e9e6f7f4a9fdaf0365f299e493f8845333a57b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:31 compute-0 podman[86669]: 2026-01-23 09:50:31.658797835 +0000 UTC m=+0.707039388 container init e0f569b191f95464eb3ff1803c7e7071695e3dd3042fbfe819c628440a466c1c (image=quay.io/ceph/ceph:v19, name=compassionate_bartik, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 23 09:50:31 compute-0 podman[86669]: 2026-01-23 09:50:31.665807075 +0000 UTC m=+0.714048608 container start e0f569b191f95464eb3ff1803c7e7071695e3dd3042fbfe819c628440a466c1c (image=quay.io/ceph/ceph:v19, name=compassionate_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:50:31 compute-0 podman[86669]: 2026-01-23 09:50:31.669446799 +0000 UTC m=+0.717688332 container attach e0f569b191f95464eb3ff1803c7e7071695e3dd3042fbfe819c628440a466c1c (image=quay.io/ceph/ceph:v19, name=compassionate_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 09:50:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:31 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Jan 23 09:50:31 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Jan 23 09:50:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Jan 23 09:50:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1143624271' entity='client.admin' 
Jan 23 09:50:32 compute-0 systemd[1]: libpod-e0f569b191f95464eb3ff1803c7e7071695e3dd3042fbfe819c628440a466c1c.scope: Deactivated successfully.
Jan 23 09:50:32 compute-0 podman[86669]: 2026-01-23 09:50:32.2970255 +0000 UTC m=+1.345267053 container died e0f569b191f95464eb3ff1803c7e7071695e3dd3042fbfe819c628440a466c1c (image=quay.io/ceph/ceph:v19, name=compassionate_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 23 09:50:32 compute-0 ceph-mon[74335]: 2.17 scrub starts
Jan 23 09:50:32 compute-0 ceph-mon[74335]: 2.17 scrub ok
Jan 23 09:50:32 compute-0 ceph-mon[74335]: 6.18 scrub starts
Jan 23 09:50:32 compute-0 ceph-mon[74335]: 6.18 scrub ok
Jan 23 09:50:32 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1143624271' entity='client.admin' 
Jan 23 09:50:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-916f299c8f90329e7efa15515e9e6f7f4a9fdaf0365f299e493f8845333a57b8-merged.mount: Deactivated successfully.
Jan 23 09:50:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:50:32 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 23 09:50:32 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 23 09:50:32 compute-0 podman[86669]: 2026-01-23 09:50:32.876284884 +0000 UTC m=+1.924526417 container remove e0f569b191f95464eb3ff1803c7e7071695e3dd3042fbfe819c628440a466c1c (image=quay.io/ceph/ceph:v19, name=compassionate_bartik, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:50:32 compute-0 sudo[86666]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:32 compute-0 systemd[1]: libpod-conmon-e0f569b191f95464eb3ff1803c7e7071695e3dd3042fbfe819c628440a466c1c.scope: Deactivated successfully.
Jan 23 09:50:33 compute-0 sudo[86745]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akbeuloftsznfdvvqiffupyuhddxcwwl ; /usr/bin/python3'
Jan 23 09:50:33 compute-0 sudo[86745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:33 compute-0 python3[86747]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:33 compute-0 podman[86748]: 2026-01-23 09:50:33.25005343 +0000 UTC m=+0.028098162 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:33 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 23 09:50:33 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 23 09:50:33 compute-0 podman[86748]: 2026-01-23 09:50:33.98767603 +0000 UTC m=+0.765720742 container create 13a47d77cee6968cb5674622293f2313a4658000fb5c72eee9412caab08b8a4c (image=quay.io/ceph/ceph:v19, name=optimistic_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 09:50:33 compute-0 ceph-mon[74335]: pgmap v95: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:33 compute-0 ceph-mon[74335]: 7.12 scrub starts
Jan 23 09:50:33 compute-0 ceph-mon[74335]: 7.12 scrub ok
Jan 23 09:50:33 compute-0 ceph-mon[74335]: 5.19 scrub starts
Jan 23 09:50:33 compute-0 ceph-mon[74335]: 5.19 scrub ok
Jan 23 09:50:34 compute-0 systemd[1]: Started libpod-conmon-13a47d77cee6968cb5674622293f2313a4658000fb5c72eee9412caab08b8a4c.scope.
Jan 23 09:50:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18db6f267e18c5cf498ebde1d9457daeabf5004a984086ab4eed316cb296316/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18db6f267e18c5cf498ebde1d9457daeabf5004a984086ab4eed316cb296316/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18db6f267e18c5cf498ebde1d9457daeabf5004a984086ab4eed316cb296316/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:34 compute-0 podman[86748]: 2026-01-23 09:50:34.055640038 +0000 UTC m=+0.833684770 container init 13a47d77cee6968cb5674622293f2313a4658000fb5c72eee9412caab08b8a4c (image=quay.io/ceph/ceph:v19, name=optimistic_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 09:50:34 compute-0 podman[86748]: 2026-01-23 09:50:34.063175632 +0000 UTC m=+0.841220344 container start 13a47d77cee6968cb5674622293f2313a4658000fb5c72eee9412caab08b8a4c (image=quay.io/ceph/ceph:v19, name=optimistic_archimedes, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:50:34 compute-0 podman[86748]: 2026-01-23 09:50:34.069830192 +0000 UTC m=+0.847874934 container attach 13a47d77cee6968cb5674622293f2313a4658000fb5c72eee9412caab08b8a4c (image=quay.io/ceph/ceph:v19, name=optimistic_archimedes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 09:50:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Jan 23 09:50:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3906855381' entity='client.admin' 
Jan 23 09:50:34 compute-0 systemd[1]: libpod-13a47d77cee6968cb5674622293f2313a4658000fb5c72eee9412caab08b8a4c.scope: Deactivated successfully.
Jan 23 09:50:34 compute-0 conmon[86763]: conmon 13a47d77cee6968cb567 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13a47d77cee6968cb5674622293f2313a4658000fb5c72eee9412caab08b8a4c.scope/container/memory.events
Jan 23 09:50:34 compute-0 podman[86748]: 2026-01-23 09:50:34.807465562 +0000 UTC m=+1.585510284 container died 13a47d77cee6968cb5674622293f2313a4658000fb5c72eee9412caab08b8a4c (image=quay.io/ceph/ceph:v19, name=optimistic_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 09:50:34 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 23 09:50:34 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 23 09:50:35 compute-0 ceph-mon[74335]: 2.16 scrub starts
Jan 23 09:50:35 compute-0 ceph-mon[74335]: 2.16 scrub ok
Jan 23 09:50:35 compute-0 ceph-mon[74335]: pgmap v96: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:35 compute-0 ceph-mon[74335]: 3.1b scrub starts
Jan 23 09:50:35 compute-0 ceph-mon[74335]: 3.1b scrub ok
Jan 23 09:50:35 compute-0 ceph-mon[74335]: 7.11 deep-scrub starts
Jan 23 09:50:35 compute-0 ceph-mon[74335]: 7.11 deep-scrub ok
Jan 23 09:50:35 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3906855381' entity='client.admin' 
Jan 23 09:50:35 compute-0 ceph-mon[74335]: 4.1c scrub starts
Jan 23 09:50:35 compute-0 ceph-mon[74335]: 4.1c scrub ok
Jan 23 09:50:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c18db6f267e18c5cf498ebde1d9457daeabf5004a984086ab4eed316cb296316-merged.mount: Deactivated successfully.
Jan 23 09:50:35 compute-0 podman[86748]: 2026-01-23 09:50:35.155853224 +0000 UTC m=+1.933897936 container remove 13a47d77cee6968cb5674622293f2313a4658000fb5c72eee9412caab08b8a4c (image=quay.io/ceph/ceph:v19, name=optimistic_archimedes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 23 09:50:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:50:35 compute-0 systemd[1]: libpod-conmon-13a47d77cee6968cb5674622293f2313a4658000fb5c72eee9412caab08b8a4c.scope: Deactivated successfully.
Jan 23 09:50:35 compute-0 sudo[86745]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:50:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:50:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:50:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 23 09:50:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 09:50:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:50:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:50:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:50:35 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:50:35 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:50:35 compute-0 sudo[86823]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybcuwkmfgfdzxsdzqezypfmlabjhuthg ; /usr/bin/python3'
Jan 23 09:50:35 compute-0 sudo[86823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:35 compute-0 python3[86825]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:35 compute-0 podman[86826]: 2026-01-23 09:50:35.549826576 +0000 UTC m=+0.027439984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:35 compute-0 podman[86826]: 2026-01-23 09:50:35.74042855 +0000 UTC m=+0.218041938 container create 00bf9d00960e2ef1899a25235edceb575ddd703f54cc79c57e21f44afdb87f68 (image=quay.io/ceph/ceph:v19, name=vibrant_pike, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 09:50:35 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event ea9628d5-012d-4f51-8549-488983a8b408 (Global Recovery Event) in 22 seconds
Jan 23 09:50:35 compute-0 systemd[1]: Started libpod-conmon-00bf9d00960e2ef1899a25235edceb575ddd703f54cc79c57e21f44afdb87f68.scope.
Jan 23 09:50:35 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:50:35 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:50:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad232de47c605905c69600fd72e8ef355fae0342cd72e8dcf5ad42c8600c9f03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad232de47c605905c69600fd72e8ef355fae0342cd72e8dcf5ad42c8600c9f03/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad232de47c605905c69600fd72e8ef355fae0342cd72e8dcf5ad42c8600c9f03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:35 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 23 09:50:35 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 23 09:50:36 compute-0 podman[86826]: 2026-01-23 09:50:36.147222107 +0000 UTC m=+0.624835505 container init 00bf9d00960e2ef1899a25235edceb575ddd703f54cc79c57e21f44afdb87f68 (image=quay.io/ceph/ceph:v19, name=vibrant_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:36 compute-0 podman[86826]: 2026-01-23 09:50:36.154804313 +0000 UTC m=+0.632417701 container start 00bf9d00960e2ef1899a25235edceb575ddd703f54cc79c57e21f44afdb87f68 (image=quay.io/ceph/ceph:v19, name=vibrant_pike, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:50:36 compute-0 podman[86826]: 2026-01-23 09:50:36.17711391 +0000 UTC m=+0.654727328 container attach 00bf9d00960e2ef1899a25235edceb575ddd703f54cc79c57e21f44afdb87f68 (image=quay.io/ceph/ceph:v19, name=vibrant_pike, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 09:50:36 compute-0 ceph-mon[74335]: 2.14 scrub starts
Jan 23 09:50:36 compute-0 ceph-mon[74335]: 2.14 scrub ok
Jan 23 09:50:36 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:36 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:36 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:36 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:36 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 09:50:36 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:36 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:50:36 compute-0 ceph-mon[74335]: Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:50:36 compute-0 ceph-mon[74335]: 5.1d scrub starts
Jan 23 09:50:36 compute-0 ceph-mon[74335]: 5.1d scrub ok
Jan 23 09:50:36 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:50:36 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:50:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Jan 23 09:50:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2854364725' entity='client.admin' 
Jan 23 09:50:36 compute-0 systemd[1]: libpod-00bf9d00960e2ef1899a25235edceb575ddd703f54cc79c57e21f44afdb87f68.scope: Deactivated successfully.
Jan 23 09:50:36 compute-0 podman[86826]: 2026-01-23 09:50:36.536005572 +0000 UTC m=+1.013618950 container died 00bf9d00960e2ef1899a25235edceb575ddd703f54cc79c57e21f44afdb87f68 (image=quay.io/ceph/ceph:v19, name=vibrant_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 09:50:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad232de47c605905c69600fd72e8ef355fae0342cd72e8dcf5ad42c8600c9f03-merged.mount: Deactivated successfully.
Jan 23 09:50:36 compute-0 podman[86826]: 2026-01-23 09:50:36.674262653 +0000 UTC m=+1.151876041 container remove 00bf9d00960e2ef1899a25235edceb575ddd703f54cc79c57e21f44afdb87f68 (image=quay.io/ceph/ceph:v19, name=vibrant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:50:36 compute-0 systemd[1]: libpod-conmon-00bf9d00960e2ef1899a25235edceb575ddd703f54cc79c57e21f44afdb87f68.scope: Deactivated successfully.
Jan 23 09:50:36 compute-0 sudo[86823]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:36 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:50:36 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:50:36 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.1f deep-scrub starts
Jan 23 09:50:36 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.1f deep-scrub ok
Jan 23 09:50:37 compute-0 sudo[86901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeucwtzxwmjjcrdmytjmuxvnetqsujcv ; /usr/bin/python3'
Jan 23 09:50:37 compute-0 sudo[86901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:37 compute-0 python3[86903]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:50:37 compute-0 sudo[86901]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:50:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:50:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:37 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 3d8766b5-4164-4751-97d4-b443d97f5383 (Updating mon deployment (+2 -> 3))
Jan 23 09:50:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 23 09:50:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 09:50:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 23 09:50:37 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 09:50:37 compute-0 ceph-mon[74335]: pgmap v97: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:37 compute-0 ceph-mon[74335]: Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:50:37 compute-0 ceph-mon[74335]: 2.12 scrub starts
Jan 23 09:50:37 compute-0 ceph-mon[74335]: 2.12 scrub ok
Jan 23 09:50:37 compute-0 ceph-mon[74335]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:50:37 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2854364725' entity='client.admin' 
Jan 23 09:50:37 compute-0 ceph-mon[74335]: 6.1f deep-scrub starts
Jan 23 09:50:37 compute-0 ceph-mon[74335]: 6.1f deep-scrub ok
Jan 23 09:50:37 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:37 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:37 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:50:37 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:37 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 23 09:50:37 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 23 09:50:37 compute-0 sudo[86940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkjexzwmcxhjymoflqtdxpqtuzdenzaf ; /usr/bin/python3'
Jan 23 09:50:37 compute-0 sudo[86940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:50:37 compute-0 python3[86942]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.nbdygh/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:37 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 23 09:50:37 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 23 09:50:37 compute-0 podman[86943]: 2026-01-23 09:50:37.885411533 +0000 UTC m=+0.025027485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:38 compute-0 podman[86943]: 2026-01-23 09:50:38.006091153 +0000 UTC m=+0.145707085 container create 9bc9095f6005110115c87ed7126ee43d28a761cfdd1404906e8fe63c84beb30e (image=quay.io/ceph/ceph:v19, name=busy_nightingale, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:50:38 compute-0 systemd[1]: Started libpod-conmon-9bc9095f6005110115c87ed7126ee43d28a761cfdd1404906e8fe63c84beb30e.scope.
Jan 23 09:50:38 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970ace26b0b9e71ba51941dd9e20fa241497f8f30b17065aa69b48d23872a215/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970ace26b0b9e71ba51941dd9e20fa241497f8f30b17065aa69b48d23872a215/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970ace26b0b9e71ba51941dd9e20fa241497f8f30b17065aa69b48d23872a215/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:38 compute-0 podman[86943]: 2026-01-23 09:50:38.267897397 +0000 UTC m=+0.407513349 container init 9bc9095f6005110115c87ed7126ee43d28a761cfdd1404906e8fe63c84beb30e (image=quay.io/ceph/ceph:v19, name=busy_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 23 09:50:38 compute-0 podman[86943]: 2026-01-23 09:50:38.27536667 +0000 UTC m=+0.414982602 container start 9bc9095f6005110115c87ed7126ee43d28a761cfdd1404906e8fe63c84beb30e (image=quay.io/ceph/ceph:v19, name=busy_nightingale, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:50:38 compute-0 podman[86943]: 2026-01-23 09:50:38.280821056 +0000 UTC m=+0.420436988 container attach 9bc9095f6005110115c87ed7126ee43d28a761cfdd1404906e8fe63c84beb30e (image=quay.io/ceph/ceph:v19, name=busy_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 23 09:50:38 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 23 09:50:38 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 23 09:50:38 compute-0 ceph-mon[74335]: Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:50:38 compute-0 ceph-mon[74335]: 7.17 deep-scrub starts
Jan 23 09:50:38 compute-0 ceph-mon[74335]: 7.17 deep-scrub ok
Jan 23 09:50:38 compute-0 ceph-mon[74335]: pgmap v98: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:38 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 09:50:38 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 09:50:38 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:38 compute-0 ceph-mon[74335]: Deploying daemon mon.compute-2 on compute-2
Jan 23 09:50:38 compute-0 ceph-mon[74335]: 4.1d scrub starts
Jan 23 09:50:38 compute-0 ceph-mon[74335]: 4.1d scrub ok
Jan 23 09:50:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.nbdygh/server_addr}] v 0)
Jan 23 09:50:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2852887520' entity='client.admin' 
Jan 23 09:50:38 compute-0 systemd[1]: libpod-9bc9095f6005110115c87ed7126ee43d28a761cfdd1404906e8fe63c84beb30e.scope: Deactivated successfully.
Jan 23 09:50:38 compute-0 podman[86943]: 2026-01-23 09:50:38.662394674 +0000 UTC m=+0.802010606 container died 9bc9095f6005110115c87ed7126ee43d28a761cfdd1404906e8fe63c84beb30e (image=quay.io/ceph/ceph:v19, name=busy_nightingale, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:50:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:50:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:50:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:50:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:50:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:50:38 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:50:38 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 23 09:50:38 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 23 09:50:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-970ace26b0b9e71ba51941dd9e20fa241497f8f30b17065aa69b48d23872a215-merged.mount: Deactivated successfully.
Jan 23 09:50:39 compute-0 podman[86943]: 2026-01-23 09:50:39.435479934 +0000 UTC m=+1.575095866 container remove 9bc9095f6005110115c87ed7126ee43d28a761cfdd1404906e8fe63c84beb30e (image=quay.io/ceph/ceph:v19, name=busy_nightingale, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 23 09:50:39 compute-0 systemd[1]: libpod-conmon-9bc9095f6005110115c87ed7126ee43d28a761cfdd1404906e8fe63c84beb30e.scope: Deactivated successfully.
Jan 23 09:50:39 compute-0 sudo[86940]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:39 compute-0 ceph-mon[74335]: 2.11 scrub starts
Jan 23 09:50:39 compute-0 ceph-mon[74335]: 2.11 scrub ok
Jan 23 09:50:39 compute-0 ceph-mon[74335]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 23 09:50:39 compute-0 ceph-mon[74335]: Cluster is now healthy
Jan 23 09:50:39 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2852887520' entity='client.admin' 
Jan 23 09:50:39 compute-0 ceph-mon[74335]: 6.c scrub starts
Jan 23 09:50:39 compute-0 ceph-mon[74335]: 6.c scrub ok
Jan 23 09:50:39 compute-0 ceph-mon[74335]: pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:39 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 23 09:50:39 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:50:40 compute-0 sudo[87018]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rawzmbzifqrhfmqbibqpssfcwlwhsvym ; /usr/bin/python3'
Jan 23 09:50:40 compute-0 sudo[87018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:40 compute-0 python3[87020]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard//server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
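The task logged above drives the containerized ceph CLI through podman (image quay.io/ceph/ceph:v19, admin keyring and ceph.conf mounted from /etc/ceph) to set the dashboard bind address on the mgr. Below is a minimal Python sketch of the same call; run_ceph_in_container is a hypothetical wrapper for illustration only, not the module Ansible actually invokes, and the image, fsid, paths, and config key are copied verbatim from the logged command.

    import subprocess

    def run_ceph_in_container(*ceph_args: str) -> str:
        # Mirrors the podman invocation recorded in the log above (sketch only).
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host", "--interactive",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
            "--fsid", "f3005f84-239a-55b6-a948-8f1fb592b920",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *ceph_args,
        ]
        # check=True raises CalledProcessError on a non-zero exit, roughly what the
        # Ansible command module reports as a failed task.
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Same setting the task applies (key copied as logged, including the empty daemon-id segment):
    run_ceph_in_container("config", "set", "mgr", "mgr/dashboard//server_addr", "192.168.122.101")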
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 23 09:50:40 compute-0 podman[87021]: 2026-01-23 09:50:40.501082795 +0000 UTC m=+0.118958913 container create 27e914734233dfa845596d7a732b950a6e8ba6779cd310a1e21a8d56493ef488 (image=quay.io/ceph/ceph:v19, name=keen_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 09:50:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:50:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 23 09:50:40 compute-0 podman[87021]: 2026-01-23 09:50:40.415587407 +0000 UTC m=+0.033463545 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:40 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3067888333; not ready for session (expect reconnect)
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:50:40 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:40 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 23 09:50:40 compute-0 systemd[1]: Started libpod-conmon-27e914734233dfa845596d7a732b950a6e8ba6779cd310a1e21a8d56493ef488.scope.
Jan 23 09:50:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 23 09:50:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 23 09:50:40 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 23 09:50:40 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:50:40 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:40 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 09:50:40 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 23 09:50:40 compute-0 ceph-mon[74335]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 09:50:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82ba14bd035b50b7f0197c10efdcaed64fd507fe3a08f9e193440526b8ed93dd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82ba14bd035b50b7f0197c10efdcaed64fd507fe3a08f9e193440526b8ed93dd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82ba14bd035b50b7f0197c10efdcaed64fd507fe3a08f9e193440526b8ed93dd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:40 compute-0 podman[87021]: 2026-01-23 09:50:40.610830473 +0000 UTC m=+0.228706601 container init 27e914734233dfa845596d7a732b950a6e8ba6779cd310a1e21a8d56493ef488 (image=quay.io/ceph/ceph:v19, name=keen_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 09:50:40 compute-0 podman[87021]: 2026-01-23 09:50:40.61844884 +0000 UTC m=+0.236324948 container start 27e914734233dfa845596d7a732b950a6e8ba6779cd310a1e21a8d56493ef488 (image=quay.io/ceph/ceph:v19, name=keen_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:50:40 compute-0 podman[87021]: 2026-01-23 09:50:40.628097505 +0000 UTC m=+0.245973653 container attach 27e914734233dfa845596d7a732b950a6e8ba6779cd310a1e21a8d56493ef488 (image=quay.io/ceph/ceph:v19, name=keen_leakey, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 09:50:40 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 9 completed events
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:50:40 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 23 09:50:40 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 23 09:50:40 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 09:50:41 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 09:50:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:41 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3067888333; not ready for session (expect reconnect)
Jan 23 09:50:41 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:50:41 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:41 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 09:50:41 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.3 deep-scrub starts
Jan 23 09:50:41 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.3 deep-scrub ok
Jan 23 09:50:42 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 09:50:42 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3067888333; not ready for session (expect reconnect)
Jan 23 09:50:42 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:50:42 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:42 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 09:50:42 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 23 09:50:42 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 23 09:50:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:43 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3067888333; not ready for session (expect reconnect)
Jan 23 09:50:43 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:50:43 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:43 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 09:50:43 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 09:50:43 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 23 09:50:43 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 23 09:50:43 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 09:50:44 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 09:50:44 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3067888333; not ready for session (expect reconnect)
Jan 23 09:50:44 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:50:44 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:44 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 09:50:44 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 23 09:50:44 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 23 09:50:45 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 09:50:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:45 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3067888333; not ready for session (expect reconnect)
Jan 23 09:50:45 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:45 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 09:50:45 compute-0 ceph-mon[74335]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 23 09:50:45 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : monmap epoch 2
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : last_changed 2026-01-23T09:50:40.551249+0000
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : created 2026-01-23T09:47:35.499222+0000
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 23 09:50:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap 
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.nbdygh(active, since 2m)
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 09:50:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:45 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 23 09:50:45 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 23 09:50:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 2.f scrub starts
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 2.f scrub ok
Jan 23 09:50:45 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:50:45 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:45 compute-0 ceph-mon[74335]: mon.compute-0 calling monitor election
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 4.f scrub starts
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 4.f scrub ok
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 2.b scrub starts
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 2.b scrub ok
Jan 23 09:50:45 compute-0 ceph-mon[74335]: pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:45 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 4.3 deep-scrub starts
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 4.3 deep-scrub ok
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 7.16 scrub starts
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 7.16 scrub ok
Jan 23 09:50:45 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:45 compute-0 ceph-mon[74335]: mon.compute-2 calling monitor election
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 3.4 scrub starts
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 3.4 scrub ok
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 7.5 deep-scrub starts
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 7.5 deep-scrub ok
Jan 23 09:50:45 compute-0 ceph-mon[74335]: pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:45 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 6.1 scrub starts
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 6.1 scrub ok
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 7.0 scrub starts
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 7.0 scrub ok
Jan 23 09:50:45 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 6.6 scrub starts
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 6.6 scrub ok
Jan 23 09:50:45 compute-0 ceph-mon[74335]: pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:45 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:45 compute-0 ceph-mon[74335]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 23 09:50:45 compute-0 ceph-mon[74335]: monmap epoch 2
Jan 23 09:50:45 compute-0 ceph-mon[74335]: fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:50:45 compute-0 ceph-mon[74335]: last_changed 2026-01-23T09:50:40.551249+0000
Jan 23 09:50:45 compute-0 ceph-mon[74335]: created 2026-01-23T09:47:35.499222+0000
Jan 23 09:50:45 compute-0 ceph-mon[74335]: min_mon_release 19 (squid)
Jan 23 09:50:45 compute-0 ceph-mon[74335]: election_strategy: 1
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 09:50:45 compute-0 ceph-mon[74335]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 23 09:50:45 compute-0 ceph-mon[74335]: fsmap 
Jan 23 09:50:45 compute-0 ceph-mon[74335]: osdmap e31: 2 total, 2 up, 2 in
Jan 23 09:50:45 compute-0 ceph-mon[74335]: mgrmap e9: compute-0.nbdygh(active, since 2m)
Jan 23 09:50:45 compute-0 ceph-mon[74335]: overall HEALTH_OK
Jan 23 09:50:45 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:45 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:45 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 23 09:50:45 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 23 09:50:46 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3067888333; not ready for session (expect reconnect)
Jan 23 09:50:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:50:46 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:46 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 23 09:50:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard//server_addr}] v 0)
Jan 23 09:50:46 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 23 09:50:46 compute-0 ceph-mon[74335]: 2.5 scrub starts
Jan 23 09:50:46 compute-0 ceph-mon[74335]: 2.5 scrub ok
Jan 23 09:50:46 compute-0 ceph-mon[74335]: Deploying daemon mon.compute-1 on compute-1
Jan 23 09:50:46 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/807772850' entity='client.admin' 
Jan 23 09:50:47 compute-0 systemd[1]: libpod-27e914734233dfa845596d7a732b950a6e8ba6779cd310a1e21a8d56493ef488.scope: Deactivated successfully.
Jan 23 09:50:47 compute-0 podman[87021]: 2026-01-23 09:50:47.396424638 +0000 UTC m=+7.014300756 container died 27e914734233dfa845596d7a732b950a6e8ba6779cd310a1e21a8d56493ef488 (image=quay.io/ceph/ceph:v19, name=keen_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:50:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:47 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3690398564; not ready for session (expect reconnect)
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 23 09:50:47 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:47 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 23 09:50:47 compute-0 ceph-mgr[74633]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 23 09:50:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:50:47.546+0000 7fa567cd4640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:50:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(probing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 23 09:50:47 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 23 09:50:47 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:50:47 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:47 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 23 09:50:47 compute-0 ceph-mon[74335]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 23 09:50:47 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 23 09:50:47 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 09:50:47 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 23 09:50:47 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 23 09:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-82ba14bd035b50b7f0197c10efdcaed64fd507fe3a08f9e193440526b8ed93dd-merged.mount: Deactivated successfully.
Jan 23 09:50:48 compute-0 podman[87021]: 2026-01-23 09:50:48.429728837 +0000 UTC m=+8.047604975 container remove 27e914734233dfa845596d7a732b950a6e8ba6779cd310a1e21a8d56493ef488 (image=quay.io/ceph/ceph:v19, name=keen_leakey, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:50:48 compute-0 sudo[87018]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:48 compute-0 systemd[1]: libpod-conmon-27e914734233dfa845596d7a732b950a6e8ba6779cd310a1e21a8d56493ef488.scope: Deactivated successfully.
Jan 23 09:50:48 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3690398564; not ready for session (expect reconnect)
Jan 23 09:50:48 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 23 09:50:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:48 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 23 09:50:48 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 23 09:50:48 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 23 09:50:49 compute-0 sudo[87096]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuxzbhqlfhucvazrbzaknchczwshxpyn ; /usr/bin/python3'
Jan 23 09:50:49 compute-0 sudo[87096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:49 compute-0 python3[87098]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard//server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:49 compute-0 podman[87099]: 2026-01-23 09:50:49.499587667 +0000 UTC m=+0.068114852 container create 297934f7535489d8c3d46311316809ff54f33ac178a6e2509c802e42c11dba7b (image=quay.io/ceph/ceph:v19, name=ecstatic_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:50:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:49 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3690398564; not ready for session (expect reconnect)
Jan 23 09:50:49 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 23 09:50:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:49 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 23 09:50:49 compute-0 podman[87099]: 2026-01-23 09:50:49.458071114 +0000 UTC m=+0.026598319 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:49 compute-0 systemd[1]: Started libpod-conmon-297934f7535489d8c3d46311316809ff54f33ac178a6e2509c802e42c11dba7b.scope.
Jan 23 09:50:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9af9a53de4507b389350aede62347f9e9a1a9a365ce0ef2635d5393c237c2ec/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9af9a53de4507b389350aede62347f9e9a1a9a365ce0ef2635d5393c237c2ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9af9a53de4507b389350aede62347f9e9a1a9a365ce0ef2635d5393c237c2ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:49 compute-0 podman[87099]: 2026-01-23 09:50:49.698424516 +0000 UTC m=+0.266951721 container init 297934f7535489d8c3d46311316809ff54f33ac178a6e2509c802e42c11dba7b (image=quay.io/ceph/ceph:v19, name=ecstatic_germain, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:50:49 compute-0 podman[87099]: 2026-01-23 09:50:49.704720246 +0000 UTC m=+0.273247431 container start 297934f7535489d8c3d46311316809ff54f33ac178a6e2509c802e42c11dba7b (image=quay.io/ceph/ceph:v19, name=ecstatic_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 09:50:49 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 09:50:49 compute-0 podman[87099]: 2026-01-23 09:50:49.871556692 +0000 UTC m=+0.440084017 container attach 297934f7535489d8c3d46311316809ff54f33ac178a6e2509c802e42c11dba7b (image=quay.io/ceph/ceph:v19, name=ecstatic_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 09:50:49 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 23 09:50:49 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 23 09:50:50 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 09:50:50 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 09:50:50 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3690398564; not ready for session (expect reconnect)
Jan 23 09:50:50 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 23 09:50:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:50 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 23 09:50:50 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Jan 23 09:50:50 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Jan 23 09:50:51 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 09:50:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:51 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3690398564; not ready for session (expect reconnect)
Jan 23 09:50:51 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 23 09:50:51 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:51 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 23 09:50:51 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 23 09:50:51 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 23 09:50:52 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3690398564; not ready for session (expect reconnect)
Jan 23 09:50:52 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:52 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
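The repeated "mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument" entries are the active mgr re-querying the new monitor's metadata until it finishes joining quorum (the election that completes at 09:50:52 below). A hedged sketch of an equivalent wait loop against the ceph CLI follows; wait_for_mon_metadata is an illustrative helper, not something cephadm exposes.

    import json
    import subprocess
    import time

    def wait_for_mon_metadata(name: str, timeout: float = 60.0, interval: float = 2.0) -> dict:
        # Poll `ceph mon metadata <name>` until the new mon reports its metadata,
        # mirroring the retries the active mgr logs above (sketch only).
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            result = subprocess.run(
                ["ceph", "mon", "metadata", name, "--format", "json"],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return json.loads(result.stdout)
            time.sleep(interval)  # mon not yet in quorum, e.g. the (22) Invalid argument case
        raise TimeoutError(f"mon.{name} did not publish metadata within {timeout}s")

    # e.g. wait_for_mon_metadata("compute-1")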
Jan 23 09:50:52 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 09:50:52 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 23 09:50:52 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 23 09:50:52 compute-0 ceph-mon[74335]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 23 09:50:52 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : monmap epoch 3
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : last_changed 2026-01-23T09:50:47.540109+0000
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : created 2026-01-23T09:47:35.499222+0000
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 23 09:50:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap 
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.nbdygh(active, since 2m)
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 09:50:52 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 23 09:50:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:53 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 3d8766b5-4164-4751-97d4-b443d97f5383 (Updating mon deployment (+2 -> 3))
Jan 23 09:50:53 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 3d8766b5-4164-4751-97d4-b443d97f5383 (Updating mon deployment (+2 -> 3)) in 16 seconds
Jan 23 09:50:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 3.2 scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 3.2 scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:50:53 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:53 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:50:53 compute-0 ceph-mon[74335]: mon.compute-0 calling monitor election
Jan 23 09:50:53 compute-0 ceph-mon[74335]: mon.compute-2 calling monitor election
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 4.4 scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 4.4 scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 7.d scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 7.d scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 4.6 scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 4.6 scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:53 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:53 compute-0 ceph-mon[74335]: mon.compute-1 calling monitor election
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 7.c scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 7.c scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 3.1 scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 3.1 scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 2.1a deep-scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 2.1a deep-scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 6.4 deep-scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 6.4 deep-scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 7.19 scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 7.19 scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:53 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 4.2 scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 4.2 scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 7.1a scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 7.1a scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 6.0 scrub starts
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 6.0 scrub ok
Jan 23 09:50:53 compute-0 ceph-mon[74335]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 23 09:50:53 compute-0 ceph-mon[74335]: monmap epoch 3
Jan 23 09:50:53 compute-0 ceph-mon[74335]: fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:50:53 compute-0 ceph-mon[74335]: last_changed 2026-01-23T09:50:47.540109+0000
Jan 23 09:50:53 compute-0 ceph-mon[74335]: created 2026-01-23T09:47:35.499222+0000
Jan 23 09:50:53 compute-0 ceph-mon[74335]: min_mon_release 19 (squid)
Jan 23 09:50:53 compute-0 ceph-mon[74335]: election_strategy: 1
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 23 09:50:53 compute-0 ceph-mon[74335]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 23 09:50:53 compute-0 ceph-mon[74335]: fsmap 
Jan 23 09:50:53 compute-0 ceph-mon[74335]: osdmap e31: 2 total, 2 up, 2 in
Jan 23 09:50:53 compute-0 ceph-mon[74335]: mgrmap e9: compute-0.nbdygh(active, since 2m)
Jan 23 09:50:53 compute-0 ceph-mon[74335]: overall HEALTH_OK
Jan 23 09:50:53 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:53 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev f85e257e-183d-43ca-bcb8-a5e181f805e3 (Updating mgr deployment (+2 -> 3))
Jan 23 09:50:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.uczrot", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 23 09:50:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uczrot", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 09:50:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uczrot", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 23 09:50:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 09:50:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 09:50:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:50:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:53 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.uczrot on compute-2
Jan 23 09:50:53 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.uczrot on compute-2
Jan 23 09:50:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard//server_addr}] v 0)
Jan 23 09:50:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4282911488' entity='client.admin' 
Jan 23 09:50:53 compute-0 systemd[1]: libpod-297934f7535489d8c3d46311316809ff54f33ac178a6e2509c802e42c11dba7b.scope: Deactivated successfully.
Jan 23 09:50:53 compute-0 podman[87099]: 2026-01-23 09:50:53.32394683 +0000 UTC m=+3.892474015 container died 297934f7535489d8c3d46311316809ff54f33ac178a6e2509c802e42c11dba7b (image=quay.io/ceph/ceph:v19, name=ecstatic_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 09:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9af9a53de4507b389350aede62347f9e9a1a9a365ce0ef2635d5393c237c2ec-merged.mount: Deactivated successfully.
Jan 23 09:50:53 compute-0 podman[87099]: 2026-01-23 09:50:53.409138599 +0000 UTC m=+3.977665784 container remove 297934f7535489d8c3d46311316809ff54f33ac178a6e2509c802e42c11dba7b (image=quay.io/ceph/ceph:v19, name=ecstatic_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 23 09:50:53 compute-0 systemd[1]: libpod-conmon-297934f7535489d8c3d46311316809ff54f33ac178a6e2509c802e42c11dba7b.scope: Deactivated successfully.
Jan 23 09:50:53 compute-0 sudo[87096]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:53 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3690398564; not ready for session (expect reconnect)
Jan 23 09:50:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 23 09:50:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:53 compute-0 sudo[87175]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmsrtjerqvczwhqrovurjdfnoorilxlw ; /usr/bin/python3'
Jan 23 09:50:53 compute-0 sudo[87175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:53 compute-0 python3[87177]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
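[editor's note] For readability, the _raw_params value logged by the Ansible command task above resolves to roughly the following shell invocation (reassembled from the log line with line breaks added, nothing else changed). The subsequent tasks at 09:50:56 and 09:50:58 log the same podman wrapper and differ only in the trailing ceph arguments ("mgr module enable dashboard" and "dashboard set-grafana-api-username admin"):

    podman run --rm --net=host --ipc=host --interactive \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph \
        quay.io/ceph/ceph:v19 \
        --fsid f3005f84-239a-55b6-a948-8f1fb592b920 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        mgr module disable dashboard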
Jan 23 09:50:53 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 23 09:50:53 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 23 09:50:53 compute-0 podman[87178]: 2026-01-23 09:50:53.807288201 +0000 UTC m=+0.029206648 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:53 compute-0 podman[87178]: 2026-01-23 09:50:53.935633213 +0000 UTC m=+0.157551630 container create e073788514a26ed7f278981fcde3b411b22d0c167bb0b1dcc188ef349b2e12db (image=quay.io/ceph/ceph:v19, name=condescending_swanson, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 09:50:54 compute-0 systemd[1]: Started libpod-conmon-e073788514a26ed7f278981fcde3b411b22d0c167bb0b1dcc188ef349b2e12db.scope.
Jan 23 09:50:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e581230d97475241877f714139712692822d673c2304b5ce0816c71c79a227c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e581230d97475241877f714139712692822d673c2304b5ce0816c71c79a227c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e581230d97475241877f714139712692822d673c2304b5ce0816c71c79a227c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uczrot", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 09:50:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uczrot", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
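[editor's note] The "auth get-or-create" dispatch/finished pair above is cephadm creating the keyring for the new mgr daemon on compute-2. As a hand-reconstructed equivalent (not taken from the log), the same monitor command would be issued by a CLI call along the lines of:

    ceph auth get-or-create mgr.compute-2.uczrot \
        mon 'profile mgr' osd 'allow *' mds 'allow *'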
Jan 23 09:50:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 09:50:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:54 compute-0 ceph-mon[74335]: Deploying daemon mgr.compute-2.uczrot on compute-2
Jan 23 09:50:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4282911488' entity='client.admin' 
Jan 23 09:50:54 compute-0 ceph-mon[74335]: 5.18 scrub starts
Jan 23 09:50:54 compute-0 ceph-mon[74335]: 5.18 scrub ok
Jan 23 09:50:54 compute-0 ceph-mon[74335]: pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:54 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:50:54 compute-0 ceph-mon[74335]: 5.3 scrub starts
Jan 23 09:50:54 compute-0 ceph-mon[74335]: 5.3 scrub ok
Jan 23 09:50:54 compute-0 podman[87178]: 2026-01-23 09:50:54.287662225 +0000 UTC m=+0.509580672 container init e073788514a26ed7f278981fcde3b411b22d0c167bb0b1dcc188ef349b2e12db (image=quay.io/ceph/ceph:v19, name=condescending_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:50:54 compute-0 podman[87178]: 2026-01-23 09:50:54.294464059 +0000 UTC m=+0.516382476 container start e073788514a26ed7f278981fcde3b411b22d0c167bb0b1dcc188ef349b2e12db (image=quay.io/ceph/ceph:v19, name=condescending_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 23 09:50:54 compute-0 podman[87178]: 2026-01-23 09:50:54.417133096 +0000 UTC m=+0.639051503 container attach e073788514a26ed7f278981fcde3b411b22d0c167bb0b1dcc188ef349b2e12db (image=quay.io/ceph/ceph:v19, name=condescending_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:50:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:50:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:50:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Jan 23 09:50:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3189222711' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 23 09:50:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 09:50:54 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 23 09:50:54 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 23 09:50:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.jmakme", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 23 09:50:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jmakme", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 09:50:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jmakme", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 23 09:50:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 09:50:55 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 09:50:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:50:55 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:55 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.jmakme on compute-1
Jan 23 09:50:55 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.jmakme on compute-1
Jan 23 09:50:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:55 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 10 completed events
Jan 23 09:50:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:50:55 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 23 09:50:55 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 23 09:50:56 compute-0 ceph-mon[74335]: 4.18 deep-scrub starts
Jan 23 09:50:56 compute-0 ceph-mon[74335]: 4.18 deep-scrub ok
Jan 23 09:50:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:56 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3189222711' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 23 09:50:56 compute-0 ceph-mon[74335]: 3.6 scrub starts
Jan 23 09:50:56 compute-0 ceph-mon[74335]: 3.6 scrub ok
Jan 23 09:50:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jmakme", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 09:50:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jmakme", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 23 09:50:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 09:50:56 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:56 compute-0 ceph-mon[74335]: Deploying daemon mgr.compute-1.jmakme on compute-1
Jan 23 09:50:56 compute-0 ceph-mon[74335]: pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3189222711' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 23 09:50:56 compute-0 condescending_swanson[87194]: module 'dashboard' is already disabled
Jan 23 09:50:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:56 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.nbdygh(active, since 2m)
Jan 23 09:50:56 compute-0 systemd[1]: libpod-e073788514a26ed7f278981fcde3b411b22d0c167bb0b1dcc188ef349b2e12db.scope: Deactivated successfully.
Jan 23 09:50:56 compute-0 podman[87178]: 2026-01-23 09:50:56.167318883 +0000 UTC m=+2.389237300 container died e073788514a26ed7f278981fcde3b411b22d0c167bb0b1dcc188ef349b2e12db (image=quay.io/ceph/ceph:v19, name=condescending_swanson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 23 09:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e581230d97475241877f714139712692822d673c2304b5ce0816c71c79a227c-merged.mount: Deactivated successfully.
Jan 23 09:50:56 compute-0 podman[87178]: 2026-01-23 09:50:56.211265187 +0000 UTC m=+2.433183604 container remove e073788514a26ed7f278981fcde3b411b22d0c167bb0b1dcc188ef349b2e12db (image=quay.io/ceph/ceph:v19, name=condescending_swanson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 09:50:56 compute-0 systemd[1]: libpod-conmon-e073788514a26ed7f278981fcde3b411b22d0c167bb0b1dcc188ef349b2e12db.scope: Deactivated successfully.
Jan 23 09:50:56 compute-0 sudo[87175]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:56 compute-0 sudo[87253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fohtirpoclsrijyehqoqckhknwfwluio ; /usr/bin/python3'
Jan 23 09:50:56 compute-0 sudo[87253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:56 compute-0 python3[87255]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:56 compute-0 podman[87256]: 2026-01-23 09:50:56.628623159 +0000 UTC m=+0.069319483 container create c114f802ad2711cee8f87e071e443e90564b38a3f9b60694b9e9cd1977ac7e48 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 09:50:56 compute-0 systemd[1]: Started libpod-conmon-c114f802ad2711cee8f87e071e443e90564b38a3f9b60694b9e9cd1977ac7e48.scope.
Jan 23 09:50:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:56 compute-0 podman[87256]: 2026-01-23 09:50:56.581528535 +0000 UTC m=+0.022224889 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b47f770c17097fd38c3e5305a175a21aa9f2fa9c55e902f85123b9c5803c6cd9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b47f770c17097fd38c3e5305a175a21aa9f2fa9c55e902f85123b9c5803c6cd9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b47f770c17097fd38c3e5305a175a21aa9f2fa9c55e902f85123b9c5803c6cd9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:56 compute-0 podman[87256]: 2026-01-23 09:50:56.69083824 +0000 UTC m=+0.131534584 container init c114f802ad2711cee8f87e071e443e90564b38a3f9b60694b9e9cd1977ac7e48 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 09:50:56 compute-0 podman[87256]: 2026-01-23 09:50:56.697204192 +0000 UTC m=+0.137900516 container start c114f802ad2711cee8f87e071e443e90564b38a3f9b60694b9e9cd1977ac7e48 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:56 compute-0 podman[87256]: 2026-01-23 09:50:56.700275511 +0000 UTC m=+0.140971855 container attach c114f802ad2711cee8f87e071e443e90564b38a3f9b60694b9e9cd1977ac7e48 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 09:50:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:50:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:50:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:56 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Jan 23 09:50:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 09:50:56 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Jan 23 09:50:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:56 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev f85e257e-183d-43ca-bcb8-a5e181f805e3 (Updating mgr deployment (+2 -> 3))
Jan 23 09:50:56 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event f85e257e-183d-43ca-bcb8-a5e181f805e3 (Updating mgr deployment (+2 -> 3)) in 4 seconds
Jan 23 09:50:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 23 09:50:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:56 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 9d98c16d-c38d-44c5-97f3-a4b5e4c640f9 (Updating crash deployment (+1 -> 3))
Jan 23 09:50:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 23 09:50:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:50:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 09:50:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:50:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:56 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 23 09:50:56 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 23 09:50:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Jan 23 09:50:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1618362368' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 23 09:50:57 compute-0 ceph-mon[74335]: 6.19 scrub starts
Jan 23 09:50:57 compute-0 ceph-mon[74335]: 6.19 scrub ok
Jan 23 09:50:57 compute-0 ceph-mon[74335]: 5.0 scrub starts
Jan 23 09:50:57 compute-0 ceph-mon[74335]: 5.0 scrub ok
Jan 23 09:50:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3189222711' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 23 09:50:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:57 compute-0 ceph-mon[74335]: mgrmap e10: compute-0.nbdygh(active, since 2m)
Jan 23 09:50:57 compute-0 ceph-mon[74335]: 3.1c scrub starts
Jan 23 09:50:57 compute-0 ceph-mon[74335]: 3.1c scrub ok
Jan 23 09:50:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:57 compute-0 ceph-mon[74335]: 3.7 deep-scrub starts
Jan 23 09:50:57 compute-0 ceph-mon[74335]: 3.7 deep-scrub ok
Jan 23 09:50:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' 
Jan 23 09:50:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:50:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 09:50:57 compute-0 ceph-mon[74335]: from='mgr.14122 192.168.122.100:0/615021264' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:50:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1618362368' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 23 09:50:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:50:57 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 23 09:50:57 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 23 09:50:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1618362368' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  1: '-n'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  2: 'mgr.compute-0.nbdygh'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  3: '-f'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  4: '--setuser'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  5: 'ceph'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  6: '--setgroup'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  7: 'ceph'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  8: '--default-log-to-file=false'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  9: '--default-log-to-journald=true'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr respawn  exe_path /proc/self/exe
Jan 23 09:50:58 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.nbdygh(active, since 2m)
Jan 23 09:50:58 compute-0 ceph-mon[74335]: Deploying daemon crash.compute-2 on compute-2
Jan 23 09:50:58 compute-0 ceph-mon[74335]: pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:50:58 compute-0 ceph-mon[74335]: 6.1a scrub starts
Jan 23 09:50:58 compute-0 ceph-mon[74335]: 6.1a scrub ok
Jan 23 09:50:58 compute-0 systemd[1]: libpod-c114f802ad2711cee8f87e071e443e90564b38a3f9b60694b9e9cd1977ac7e48.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 podman[87256]: 2026-01-23 09:50:58.2886974 +0000 UTC m=+1.729393744 container died c114f802ad2711cee8f87e071e443e90564b38a3f9b60694b9e9cd1977ac7e48 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 09:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b47f770c17097fd38c3e5305a175a21aa9f2fa9c55e902f85123b9c5803c6cd9-merged.mount: Deactivated successfully.
Jan 23 09:50:58 compute-0 podman[87256]: 2026-01-23 09:50:58.329120014 +0000 UTC m=+1.769816338 container remove c114f802ad2711cee8f87e071e443e90564b38a3f9b60694b9e9cd1977ac7e48 (image=quay.io/ceph/ceph:v19, name=gallant_solomon, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:50:58 compute-0 sshd-session[75687]: Connection closed by 192.168.122.100 port 41982
Jan 23 09:50:58 compute-0 sshd-session[75975]: Connection closed by 192.168.122.100 port 49382
Jan 23 09:50:58 compute-0 sshd-session[75946]: Connection closed by 192.168.122.100 port 49376
Jan 23 09:50:58 compute-0 sshd-session[75919]: Connection closed by 192.168.122.100 port 49362
Jan 23 09:50:58 compute-0 sshd-session[75745]: Connection closed by 192.168.122.100 port 42002
Jan 23 09:50:58 compute-0 sshd-session[75803]: Connection closed by 192.168.122.100 port 42014
Jan 23 09:50:58 compute-0 sshd-session[75890]: Connection closed by 192.168.122.100 port 42038
Jan 23 09:50:58 compute-0 sshd-session[75716]: Connection closed by 192.168.122.100 port 41998
Jan 23 09:50:58 compute-0 sshd-session[75861]: Connection closed by 192.168.122.100 port 42032
Jan 23 09:50:58 compute-0 sshd-session[75832]: Connection closed by 192.168.122.100 port 42022
Jan 23 09:50:58 compute-0 sshd-session[75774]: Connection closed by 192.168.122.100 port 42006
Jan 23 09:50:58 compute-0 sshd-session[75686]: Connection closed by 192.168.122.100 port 41980
Jan 23 09:50:58 compute-0 sshd-session[75771]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 sshd-session[75943]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 sshd-session[75681]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 systemd[1]: libpod-conmon-c114f802ad2711cee8f87e071e443e90564b38a3f9b60694b9e9cd1977ac7e48.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 sshd-session[75829]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 sshd-session[75858]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 sshd-session[75972]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 sshd-session[75800]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 23 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 systemd[1]: session-33.scope: Consumed 19.458s CPU time.
Jan 23 09:50:58 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 32 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 26 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 29 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 33 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 27 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 sudo[87253]: pam_unix(sudo:session): session closed for user root
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 28 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 sshd-session[75887]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 sshd-session[75742]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 sshd-session[75916]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 sshd-session[75664]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 sshd-session[75713]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:50:58 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 26.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 30 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 25 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 31 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 24 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Session 21 logged out. Waiting for processes to exit.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 28.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 23.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 32.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 33.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 27.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 29.
Jan 23 09:50:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ignoring --setuser ceph since I am not root
Jan 23 09:50:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ignoring --setgroup ceph since I am not root
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 30.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 21.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 25.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 24.
Jan 23 09:50:58 compute-0 systemd-logind[784]: Removed session 31.
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: pidfile_write: ignore empty --pid-file
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'alerts'
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'balancer'
Jan 23 09:50:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:50:58.540+0000 7f9fb9870140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:50:58 compute-0 sudo[87350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnmtlelwhbylsycdmihjaxunujjewktl ; /usr/bin/python3'
Jan 23 09:50:58 compute-0 sudo[87350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:50:58 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'cephadm'
Jan 23 09:50:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:50:58.645+0000 7f9fb9870140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:50:58 compute-0 python3[87352]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:50:58 compute-0 podman[87353]: 2026-01-23 09:50:58.826860592 +0000 UTC m=+0.047610028 container create c43caefbd7f105fc68e61882ebd3f67bc4559910024bcfcef8d6277d47189a1b (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:58 compute-0 systemd[1]: Started libpod-conmon-c43caefbd7f105fc68e61882ebd3f67bc4559910024bcfcef8d6277d47189a1b.scope.
Jan 23 09:50:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad159329ed8e4223bda5c1cd0c9b2634717d05b4f9ba939e32800a23de274049/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:58 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Jan 23 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad159329ed8e4223bda5c1cd0c9b2634717d05b4f9ba939e32800a23de274049/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:58 compute-0 podman[87353]: 2026-01-23 09:50:58.806405979 +0000 UTC m=+0.027155435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad159329ed8e4223bda5c1cd0c9b2634717d05b4f9ba939e32800a23de274049/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:50:58 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Jan 23 09:50:58 compute-0 podman[87353]: 2026-01-23 09:50:58.917670244 +0000 UTC m=+0.138419700 container init c43caefbd7f105fc68e61882ebd3f67bc4559910024bcfcef8d6277d47189a1b (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:50:58 compute-0 podman[87353]: 2026-01-23 09:50:58.923413721 +0000 UTC m=+0.144163157 container start c43caefbd7f105fc68e61882ebd3f67bc4559910024bcfcef8d6277d47189a1b (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 09:50:58 compute-0 podman[87353]: 2026-01-23 09:50:58.92726376 +0000 UTC m=+0.148013196 container attach c43caefbd7f105fc68e61882ebd3f67bc4559910024bcfcef8d6277d47189a1b (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:50:59 compute-0 ceph-mon[74335]: 4.0 scrub starts
Jan 23 09:50:59 compute-0 ceph-mon[74335]: 4.0 scrub ok
Jan 23 09:50:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1618362368' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 23 09:50:59 compute-0 ceph-mon[74335]: mgrmap e11: compute-0.nbdygh(active, since 2m)
Jan 23 09:50:59 compute-0 ceph-mon[74335]: 5.1a deep-scrub starts
Jan 23 09:50:59 compute-0 ceph-mon[74335]: 5.1a deep-scrub ok
Jan 23 09:50:59 compute-0 ceph-mon[74335]: 4.7 deep-scrub starts
Jan 23 09:50:59 compute-0 ceph-mon[74335]: 4.7 deep-scrub ok
Jan 23 09:50:59 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'crash'
Jan 23 09:50:59 compute-0 ceph-mgr[74633]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:50:59 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'dashboard'
Jan 23 09:50:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:50:59.576+0000 7f9fb9870140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:50:59 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.0 deep-scrub starts
Jan 23 09:50:59 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.0 deep-scrub ok
Jan 23 09:51:00 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'devicehealth'
Jan 23 09:51:00 compute-0 ceph-mgr[74633]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:51:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:00.306+0000 7f9fb9870140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:51:00 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'diskprediction_local'
Jan 23 09:51:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 23 09:51:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 23 09:51:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   from numpy import show_config as show_numpy_config
Jan 23 09:51:00 compute-0 ceph-mgr[74633]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:51:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:00.505+0000 7f9fb9870140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:51:00 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'influx'
Jan 23 09:51:00 compute-0 ceph-mgr[74633]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:51:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:00.592+0000 7f9fb9870140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:51:00 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'insights'
Jan 23 09:51:00 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'iostat'
Jan 23 09:51:00 compute-0 ceph-mgr[74633]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:51:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:00.744+0000 7f9fb9870140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:51:00 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'k8sevents'
Jan 23 09:51:00 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 23 09:51:00 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 23 09:51:01 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'localpool'
Jan 23 09:51:01 compute-0 ceph-mon[74335]: 4.1b scrub starts
Jan 23 09:51:01 compute-0 ceph-mon[74335]: 4.1b scrub ok
Jan 23 09:51:01 compute-0 ceph-mon[74335]: 3.0 deep-scrub starts
Jan 23 09:51:01 compute-0 ceph-mon[74335]: 3.0 deep-scrub ok
Jan 23 09:51:01 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mds_autoscaler'
Jan 23 09:51:01 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mirroring'
Jan 23 09:51:01 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uczrot started
Jan 23 09:51:01 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'nfs'
Jan 23 09:51:01 compute-0 ceph-mgr[74633]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:51:01 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'orchestrator'
Jan 23 09:51:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:01.929+0000 7f9fb9870140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:51:01 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 23 09:51:01 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 23 09:51:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:02.185+0000 7f9fb9870140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_perf_query'
Jan 23 09:51:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:02.275+0000 7f9fb9870140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_support'
Jan 23 09:51:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:02.354+0000 7f9fb9870140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'pg_autoscaler'
Jan 23 09:51:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:02.445+0000 7f9fb9870140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'progress'
Jan 23 09:51:02 compute-0 ceph-mon[74335]: 5.1b scrub starts
Jan 23 09:51:02 compute-0 ceph-mon[74335]: 5.1b scrub ok
Jan 23 09:51:02 compute-0 ceph-mon[74335]: 5.6 scrub starts
Jan 23 09:51:02 compute-0 ceph-mon[74335]: 5.6 scrub ok
Jan 23 09:51:02 compute-0 ceph-mon[74335]: 3.1d scrub starts
Jan 23 09:51:02 compute-0 ceph-mon[74335]: 3.1d scrub ok
Jan 23 09:51:02 compute-0 ceph-mon[74335]: Standby manager daemon compute-2.uczrot started
Jan 23 09:51:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:02.530+0000 7f9fb9870140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'prometheus'
Jan 23 09:51:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:02.922+0000 7f9fb9870140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:51:02 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rbd_support'
Jan 23 09:51:02 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 23 09:51:02 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 23 09:51:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:03.041+0000 7f9fb9870140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:51:03 compute-0 ceph-mgr[74633]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:51:03 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'restful'
Jan 23 09:51:03 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rgw'
Jan 23 09:51:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:03.557+0000 7f9fb9870140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:51:03 compute-0 ceph-mgr[74633]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:51:03 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rook'
Jan 23 09:51:03 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.nbdygh(active, since 2m), standbys: compute-2.uczrot
Jan 23 09:51:03 compute-0 ceph-mon[74335]: 5.c scrub starts
Jan 23 09:51:03 compute-0 ceph-mon[74335]: 5.c scrub ok
Jan 23 09:51:03 compute-0 ceph-mon[74335]: 4.1a scrub starts
Jan 23 09:51:03 compute-0 ceph-mon[74335]: 4.1a scrub ok
Jan 23 09:51:03 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 23 09:51:03 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 23 09:51:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:04.207+0000 7f9fb9870140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'selftest'
Jan 23 09:51:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:04.297+0000 7f9fb9870140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'snap_schedule'
Jan 23 09:51:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:04.398+0000 7f9fb9870140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'stats'
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'status'
Jan 23 09:51:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:04.589+0000 7f9fb9870140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telegraf'
Jan 23 09:51:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:04.677+0000 7f9fb9870140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telemetry'
Jan 23 09:51:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:04.877+0000 7f9fb9870140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:51:04 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'test_orchestrator'
Jan 23 09:51:04 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 23 09:51:04 compute-0 ceph-mon[74335]: 6.f scrub starts
Jan 23 09:51:04 compute-0 ceph-mon[74335]: 6.f scrub ok
Jan 23 09:51:04 compute-0 ceph-mon[74335]: 6.d scrub starts
Jan 23 09:51:04 compute-0 ceph-mon[74335]: 6.d scrub ok
Jan 23 09:51:04 compute-0 ceph-mon[74335]: mgrmap e12: compute-0.nbdygh(active, since 2m), standbys: compute-2.uczrot
Jan 23 09:51:04 compute-0 ceph-mon[74335]: 5.1c deep-scrub starts
Jan 23 09:51:04 compute-0 ceph-mon[74335]: 5.1c deep-scrub ok
Jan 23 09:51:04 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 23 09:51:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:05.146+0000 7f9fb9870140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:05 compute-0 ceph-mgr[74633]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:05 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'volumes'
Jan 23 09:51:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:05.473+0000 7f9fb9870140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:51:05 compute-0 ceph-mgr[74633]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:51:05 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'zabbix'
Jan 23 09:51:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:05.556+0000 7f9fb9870140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:51:05 compute-0 ceph-mgr[74633]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Active manager daemon compute-0.nbdygh restarted
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.nbdygh
Jan 23 09:51:05 compute-0 ceph-mgr[74633]: ms_deliver_dispatch: unhandled message 0x55cab703cd00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Jan 23 09:51:05 compute-0 ceph-mgr[74633]: mgr handle_mgr_map Activating!
Jan 23 09:51:05 compute-0 ceph-mgr[74633]: mgr handle_mgr_map I am now activating
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.nbdygh(active, starting, since 0.407511s), standbys: compute-2.uczrot
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"} v 0)
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"}]: dispatch
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.uczrot", "id": "compute-2.uczrot"} v 0)
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uczrot", "id": "compute-2.uczrot"}]: dispatch
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e1 all = 1
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 09:51:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 23 09:51:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 09:51:05 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 23 09:51:05 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: balancer
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [balancer INFO root] Starting
Jan 23 09:51:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Manager daemon compute-0.nbdygh is now available
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:51:06
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: cephadm
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: crash
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: dashboard
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: devicehealth
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Starting
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: iostat
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO sso] Loading SSO DB version=1
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: nfs
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: orchestrator
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: pg_autoscaler
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: progress
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [progress INFO root] Loading...
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f9f3d40cd90>, <progress.module.GhostEvent object at 0x7f9f3d417040>, <progress.module.GhostEvent object at 0x7f9f3d417070>, <progress.module.GhostEvent object at 0x7f9f3d4170a0>, <progress.module.GhostEvent object at 0x7f9f3d4170d0>, <progress.module.GhostEvent object at 0x7f9f3d417100>, <progress.module.GhostEvent object at 0x7f9f3d417130>, <progress.module.GhostEvent object at 0x7f9f3d417160>, <progress.module.GhostEvent object at 0x7f9f3d417190>, <progress.module.GhostEvent object at 0x7f9f3d4171c0>] historic events
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [progress INFO root] Loaded OSDMap, ready.
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] recovery thread starting
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] starting setup
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: rbd_support
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: restful
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: status
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [restful INFO root] server_addr: :: server_port: 8003
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [restful WARNING root] server not running: no certificate configured
Jan 23 09:51:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"} v 0)
Jan 23 09:51:06 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"}]: dispatch
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: telemetry
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] PerfHandler: starting
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TaskHandler: starting
Jan 23 09:51:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"} v 0)
Jan 23 09:51:06 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"}]: dispatch
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [rbd_support INFO root] setup complete
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: volumes
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 23 09:51:06 compute-0 sshd-session[87526]: Accepted publickey for ceph-admin from 192.168.122.100 port 35770 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:51:06 compute-0 systemd-logind[784]: New session 34 of user ceph-admin.
Jan 23 09:51:06 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Jan 23 09:51:06 compute-0 sshd-session[87526]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:51:06 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.module] Engine started.
Jan 23 09:51:06 compute-0 sudo[87536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:51:06 compute-0 sudo[87536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:06 compute-0 sudo[87536]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:06 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Jan 23 09:51:06 compute-0 sudo[87561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 09:51:06 compute-0 sudo[87561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:06 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Jan 23 09:51:07 compute-0 ceph-mon[74335]: 3.b scrub starts
Jan 23 09:51:07 compute-0 ceph-mon[74335]: 3.b scrub ok
Jan 23 09:51:07 compute-0 ceph-mon[74335]: 5.d scrub starts
Jan 23 09:51:07 compute-0 ceph-mon[74335]: 5.d scrub ok
Jan 23 09:51:07 compute-0 ceph-mon[74335]: 6.e scrub starts
Jan 23 09:51:07 compute-0 ceph-mon[74335]: 6.e scrub ok
Jan 23 09:51:07 compute-0 ceph-mon[74335]: Active manager daemon compute-0.nbdygh restarted
Jan 23 09:51:07 compute-0 ceph-mon[74335]: Activating manager daemon compute-0.nbdygh
Jan 23 09:51:07 compute-0 ceph-mon[74335]: osdmap e32: 2 total, 2 up, 2 in
Jan 23 09:51:07 compute-0 ceph-mon[74335]: mgrmap e13: compute-0.nbdygh(active, starting, since 0.407511s), standbys: compute-2.uczrot
Jan 23 09:51:07 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:51:07 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:51:07 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:51:07 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"}]: dispatch
Jan 23 09:51:07 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uczrot", "id": "compute-2.uczrot"}]: dispatch
Jan 23 09:51:07 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:51:07 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:51:07 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 09:51:07 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 09:51:07 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 09:51:07 compute-0 podman[87653]: 2026-01-23 09:51:07.541460664 +0000 UTC m=+0.064969683 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 09:51:07 compute-0 podman[87653]: 2026-01-23 09:51:07.644114129 +0000 UTC m=+0.167623138 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14274 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:07 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.nbdygh(active, since 2s), standbys: compute-2.uczrot
Jan 23 09:51:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v3: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:07 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:07 compute-0 youthful_antonelli[87369]: Option GRAFANA_API_USERNAME updated
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:51:07] ENGINE Bus STARTING
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:51:07] ENGINE Bus STARTING
Jan 23 09:51:07 compute-0 systemd[1]: libpod-c43caefbd7f105fc68e61882ebd3f67bc4559910024bcfcef8d6277d47189a1b.scope: Deactivated successfully.
Jan 23 09:51:07 compute-0 podman[87353]: 2026-01-23 09:51:07.710765443 +0000 UTC m=+8.931514879 container died c43caefbd7f105fc68e61882ebd3f67bc4559910024bcfcef8d6277d47189a1b (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 09:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad159329ed8e4223bda5c1cd0c9b2634717d05b4f9ba939e32800a23de274049-merged.mount: Deactivated successfully.
Jan 23 09:51:07 compute-0 podman[87353]: 2026-01-23 09:51:07.770633144 +0000 UTC m=+8.991382580 container remove c43caefbd7f105fc68e61882ebd3f67bc4559910024bcfcef8d6277d47189a1b (image=quay.io/ceph/ceph:v19, name=youthful_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:07 compute-0 systemd[1]: libpod-conmon-c43caefbd7f105fc68e61882ebd3f67bc4559910024bcfcef8d6277d47189a1b.scope: Deactivated successfully.
Jan 23 09:51:07 compute-0 sudo[87350]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:51:07] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:51:07] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:51:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:51:07] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:51:07] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:51:07] ENGINE Bus STARTED
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:51:07] ENGINE Bus STARTED
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:51:07] ENGINE Client ('192.168.122.100', 55612) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:51:07] ENGINE Client ('192.168.122.100', 55612) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:51:07 compute-0 sudo[87786]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqukcoimrwfgdugergurrrerkkqfkkmr ; /usr/bin/python3'
Jan 23 09:51:07 compute-0 sudo[87786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:07 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Jan 23 09:51:07 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Jan 23 09:51:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v4: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:08 compute-0 sudo[87561]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:51:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:51:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:51:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:51:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:51:08 compute-0 python3[87796]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Jan 23 09:51:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:51:08 compute-0 sudo[87800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:51:08 compute-0 sudo[87800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:08 compute-0 sudo[87800]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:08 compute-0 podman[87806]: 2026-01-23 09:51:08.14821024 +0000 UTC m=+0.041064651 container create b824b54d30e96c30d26fd689c789c295fcce36182b2f1144ff0fdd63edd563f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:08 compute-0 systemd[1]: Started libpod-conmon-b824b54d30e96c30d26fd689c789c295fcce36182b2f1144ff0fdd63edd563f4.scope.
Jan 23 09:51:08 compute-0 sudo[87838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 09:51:08 compute-0 sudo[87838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506441b231232613513710e82f9701379bb75d79a8bbff2d62e298df98862d8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506441b231232613513710e82f9701379bb75d79a8bbff2d62e298df98862d8e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506441b231232613513710e82f9701379bb75d79a8bbff2d62e298df98862d8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:08 compute-0 podman[87806]: 2026-01-23 09:51:08.129625115 +0000 UTC m=+0.022479546 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:08 compute-0 podman[87806]: 2026-01-23 09:51:08.262086472 +0000 UTC m=+0.154940913 container init b824b54d30e96c30d26fd689c789c295fcce36182b2f1144ff0fdd63edd563f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:08 compute-0 podman[87806]: 2026-01-23 09:51:08.2760894 +0000 UTC m=+0.168943811 container start b824b54d30e96c30d26fd689c789c295fcce36182b2f1144ff0fdd63edd563f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:51:08 compute-0 podman[87806]: 2026-01-23 09:51:08.280647427 +0000 UTC m=+0.173501868 container attach b824b54d30e96c30d26fd689c789c295fcce36182b2f1144ff0fdd63edd563f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_lamport, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:51:08 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Check health
Jan 23 09:51:08 compute-0 ceph-mon[74335]: 5.a scrub starts
Jan 23 09:51:08 compute-0 ceph-mon[74335]: 5.a scrub ok
Jan 23 09:51:08 compute-0 ceph-mon[74335]: Manager daemon compute-0.nbdygh is now available
Jan 23 09:51:08 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"}]: dispatch
Jan 23 09:51:08 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"}]: dispatch
Jan 23 09:51:08 compute-0 ceph-mon[74335]: 4.c deep-scrub starts
Jan 23 09:51:08 compute-0 ceph-mon[74335]: 4.c deep-scrub ok
Jan 23 09:51:08 compute-0 ceph-mon[74335]: 6.9 deep-scrub starts
Jan 23 09:51:08 compute-0 ceph-mon[74335]: 6.9 deep-scrub ok
Jan 23 09:51:08 compute-0 ceph-mon[74335]: 4.e scrub starts
Jan 23 09:51:08 compute-0 ceph-mon[74335]: 4.e scrub ok
Jan 23 09:51:08 compute-0 ceph-mon[74335]: mgrmap e14: compute-0.nbdygh(active, since 2s), standbys: compute-2.uczrot
Jan 23 09:51:08 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14304 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:08 compute-0 sudo[87838]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Jan 23 09:51:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:08 compute-0 sudo[87929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:51:08 compute-0 sudo[87929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:08 compute-0 sudo[87929]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:08 compute-0 sudo[87954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 23 09:51:08 compute-0 sudo[87954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:09 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 23 09:51:09 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:51:09 compute-0 sudo[87954]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.nbdygh(active, since 4s), standbys: compute-2.uczrot
Jan 23 09:51:09 compute-0 ecstatic_lamport[87863]: Option GRAFANA_API_PASSWORD updated
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.jmakme started
Jan 23 09:51:09 compute-0 ceph-mon[74335]: [23/Jan/2026:09:51:07] ENGINE Bus STARTING
Jan 23 09:51:09 compute-0 ceph-mon[74335]: [23/Jan/2026:09:51:07] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:51:09 compute-0 ceph-mon[74335]: [23/Jan/2026:09:51:07] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:51:09 compute-0 ceph-mon[74335]: [23/Jan/2026:09:51:07] ENGINE Bus STARTED
Jan 23 09:51:09 compute-0 ceph-mon[74335]: [23/Jan/2026:09:51:07] ENGINE Client ('192.168.122.100', 55612) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:51:09 compute-0 ceph-mon[74335]: 4.b deep-scrub starts
Jan 23 09:51:09 compute-0 ceph-mon[74335]: 4.b deep-scrub ok
Jan 23 09:51:09 compute-0 ceph-mon[74335]: pgmap v4: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:09 compute-0 ceph-mon[74335]: 5.f scrub starts
Jan 23 09:51:09 compute-0 ceph-mon[74335]: 5.f scrub ok
Jan 23 09:51:09 compute-0 ceph-mon[74335]: from='client.14304 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:09 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:51:09 compute-0 systemd[1]: libpod-b824b54d30e96c30d26fd689c789c295fcce36182b2f1144ff0fdd63edd563f4.scope: Deactivated successfully.
Jan 23 09:51:09 compute-0 podman[87806]: 2026-01-23 09:51:09.868073391 +0000 UTC m=+1.760927802 container died b824b54d30e96c30d26fd689c789c295fcce36182b2f1144ff0fdd63edd563f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-506441b231232613513710e82f9701379bb75d79a8bbff2d62e298df98862d8e-merged.mount: Deactivated successfully.
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:09 compute-0 podman[87806]: 2026-01-23 09:51:09.913916234 +0000 UTC m=+1.806770645 container remove b824b54d30e96c30d26fd689c789c295fcce36182b2f1144ff0fdd63edd563f4 (image=quay.io/ceph/ceph:v19, name=ecstatic_lamport, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 23 09:51:09 compute-0 systemd[1]: libpod-conmon-b824b54d30e96c30d26fd689c789c295fcce36182b2f1144ff0fdd63edd563f4.scope: Deactivated successfully.
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:51:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:51:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:51:09 compute-0 sudo[87786]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:51:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:09 compute-0 sudo[88011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 23 09:51:09 compute-0 sudo[88011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:09 compute-0 sudo[88011]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.8 deep-scrub starts
Jan 23 09:51:10 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.8 deep-scrub ok
Jan 23 09:51:10 compute-0 sudo[88036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph
Jan 23 09:51:10 compute-0 sudo[88036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88036]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 sudo[88061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:51:10 compute-0 sudo[88061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88061]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 sudo[88107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqshmcfaelnxhbbvzaknxngqxlthvocf ; /usr/bin/python3'
Jan 23 09:51:10 compute-0 sudo[88107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:10 compute-0 sudo[88112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:51:10 compute-0 sudo[88112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88112]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 sudo[88137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:51:10 compute-0 sudo[88137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88137]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 python3[88111]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:10 compute-0 podman[88174]: 2026-01-23 09:51:10.31692528 +0000 UTC m=+0.043239217 container create 2ea667cbca0f4a1587205a810219bca4af7e407569e05b1951cc754a73661cae (image=quay.io/ceph/ceph:v19, name=exciting_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:10 compute-0 systemd[1]: Started libpod-conmon-2ea667cbca0f4a1587205a810219bca4af7e407569e05b1951cc754a73661cae.scope.
Jan 23 09:51:10 compute-0 sudo[88195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:51:10 compute-0 sudo[88195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88195]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b609fbec41c48b0ea3acb258e84d334c26aa4c73d45166debeaee4f4bc0e08c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b609fbec41c48b0ea3acb258e84d334c26aa4c73d45166debeaee4f4bc0e08c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b609fbec41c48b0ea3acb258e84d334c26aa4c73d45166debeaee4f4bc0e08c2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:10 compute-0 podman[88174]: 2026-01-23 09:51:10.390780178 +0000 UTC m=+0.117094135 container init 2ea667cbca0f4a1587205a810219bca4af7e407569e05b1951cc754a73661cae (image=quay.io/ceph/ceph:v19, name=exciting_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:51:10 compute-0 podman[88174]: 2026-01-23 09:51:10.299618427 +0000 UTC m=+0.025932384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:10 compute-0 podman[88174]: 2026-01-23 09:51:10.396686969 +0000 UTC m=+0.123000906 container start 2ea667cbca0f4a1587205a810219bca4af7e407569e05b1951cc754a73661cae (image=quay.io/ceph/ceph:v19, name=exciting_jones, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:51:10 compute-0 podman[88174]: 2026-01-23 09:51:10.401475742 +0000 UTC m=+0.127789699 container attach 2ea667cbca0f4a1587205a810219bca4af7e407569e05b1951cc754a73661cae (image=quay.io/ceph/ceph:v19, name=exciting_jones, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:10 compute-0 sudo[88225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:51:10 compute-0 sudo[88225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88225]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:10 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:10 compute-0 sudo[88251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 23 09:51:10 compute-0 sudo[88251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88251]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:10 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:10 compute-0 sudo[88278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:51:10 compute-0 sudo[88278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88278]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:10 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:10 compute-0 sudo[88320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:51:10 compute-0 sudo[88320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mgr.compute-1.jmakme 192.168.122.101:0/1296895778; not ready for session (expect reconnect)
Jan 23 09:51:10 compute-0 sudo[88320]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uczrot restarted
Jan 23 09:51:10 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uczrot started
Jan 23 09:51:10 compute-0 sudo[88345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:51:10 compute-0 sudo[88345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88345]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 sudo[88370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:51:10 compute-0 sudo[88370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88370]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 sudo[88395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:51:10 compute-0 sudo[88395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88395]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14310 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Jan 23 09:51:10 compute-0 sudo[88444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:51:10 compute-0 sudo[88444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88444]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:10 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:10 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:10 compute-0 sudo[88469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:51:10 compute-0 sudo[88469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:10 compute-0 sudo[88469]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 sudo[88494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:11 compute-0 sudo[88494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 23 09:51:11 compute-0 sudo[88494]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 23 09:51:11 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:11 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:11 compute-0 sudo[88519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 23 09:51:11 compute-0 sudo[88519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88519]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 sudo[88544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph
Jan 23 09:51:11 compute-0 sudo[88544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88544]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 sudo[88569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:51:11 compute-0 sudo[88569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88569]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 sudo[88594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:51:11 compute-0 sudo[88594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88594]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:11 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:11 compute-0 sudo[88619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:51:11 compute-0 sudo[88619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88619]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:11 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:11 compute-0 sudo[88667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:51:11 compute-0 sudo[88667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88667]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 sudo[88692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:51:11 compute-0 sudo[88692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88692]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 sudo[88717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:11 compute-0 sudo[88717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88717]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:11 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:11 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mgr.compute-1.jmakme 192.168.122.101:0/1296895778; not ready for session (expect reconnect)
Jan 23 09:51:11 compute-0 sudo[88742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:51:11 compute-0 sudo[88742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88742]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 sudo[88767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:51:11 compute-0 sudo[88767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88767]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 sudo[88792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:51:11 compute-0 sudo[88792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88792]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 sudo[88817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:51:11 compute-0 sudo[88817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88817]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 sudo[88842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:51:11 compute-0 sudo[88842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:11 compute-0 sudo[88842]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:12 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 23 09:51:12 compute-0 sudo[88890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:51:12 compute-0 sudo[88890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:12 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 23 09:51:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:51:12 compute-0 sudo[88890]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:12 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:12 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:12 compute-0 sudo[88915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:51:12 compute-0 sudo[88915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:12 compute-0 sudo[88915]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:12 compute-0 sudo[88940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:12 compute-0 sudo[88940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:12 compute-0 sudo[88940]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:51:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:12 compute-0 exciting_jones[88221]: Option ALERTMANAGER_API_HOST updated
Jan 23 09:51:12 compute-0 systemd[1]: libpod-2ea667cbca0f4a1587205a810219bca4af7e407569e05b1951cc754a73661cae.scope: Deactivated successfully.
Jan 23 09:51:12 compute-0 podman[88174]: 2026-01-23 09:51:12.621932884 +0000 UTC m=+2.348246831 container died 2ea667cbca0f4a1587205a810219bca4af7e407569e05b1951cc754a73661cae (image=quay.io/ceph/ceph:v19, name=exciting_jones, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 23 09:51:12 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mgr.compute-1.jmakme 192.168.122.101:0/1296895778; not ready for session (expect reconnect)
Jan 23 09:51:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:51:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:12 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 23 09:51:12 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 23 09:51:13 compute-0 ceph-mon[74335]: 5.b scrub starts
Jan 23 09:51:13 compute-0 ceph-mon[74335]: 5.b scrub ok
Jan 23 09:51:13 compute-0 ceph-mon[74335]: 5.e scrub starts
Jan 23 09:51:13 compute-0 ceph-mon[74335]: 5.e scrub ok
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: mgrmap e15: compute-0.nbdygh(active, since 4s), standbys: compute-2.uczrot
Jan 23 09:51:13 compute-0 ceph-mon[74335]: Standby manager daemon compute-1.jmakme started
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:13 compute-0 ceph-mon[74335]: Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 09:51:13 compute-0 ceph-mon[74335]: Adjusting osd_memory_target on compute-1 to 127.9M
Jan 23 09:51:13 compute-0 ceph-mon[74335]: Unable to set osd_memory_target on compute-1 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:51:13 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:51:13 compute-0 ceph-mon[74335]: Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:51:13 compute-0 ceph-mon[74335]: Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:51:13 compute-0 ceph-mon[74335]: Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:51:13 compute-0 ceph-mon[74335]: pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:13 compute-0 ceph-mon[74335]: 3.1a scrub starts
Jan 23 09:51:13 compute-0 ceph-mon[74335]: 3.1a scrub ok
Jan 23 09:51:13 compute-0 ceph-mon[74335]: Standby manager daemon compute-2.uczrot restarted
Jan 23 09:51:13 compute-0 ceph-mon[74335]: Standby manager daemon compute-2.uczrot started
Jan 23 09:51:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.nbdygh(active, since 7s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.jmakme", "id": "compute-1.jmakme"} v 0)
Jan 23 09:51:13 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-1.jmakme", "id": "compute-1.jmakme"}]: dispatch
Jan 23 09:51:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:51:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b609fbec41c48b0ea3acb258e84d334c26aa4c73d45166debeaee4f4bc0e08c2-merged.mount: Deactivated successfully.
Jan 23 09:51:13 compute-0 podman[88174]: 2026-01-23 09:51:13.233026791 +0000 UTC m=+2.959340728 container remove 2ea667cbca0f4a1587205a810219bca4af7e407569e05b1951cc754a73661cae (image=quay.io/ceph/ceph:v19, name=exciting_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Jan 23 09:51:13 compute-0 sudo[88107]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:13 compute-0 systemd[1]: libpod-conmon-2ea667cbca0f4a1587205a810219bca4af7e407569e05b1951cc754a73661cae.scope: Deactivated successfully.
Jan 23 09:51:13 compute-0 sudo[89001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvcyltvwvfdebubjezefnrrdlnlcdasu ; /usr/bin/python3'
Jan 23 09:51:13 compute-0 sudo[89001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:51:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:51:13 compute-0 python3[89003]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 podman[89004]: 2026-01-23 09:51:13.614328042 +0000 UTC m=+0.053653833 container create aa21b7520b727582c212dc60e7d81b5b7cf35db68540c5b234d84dfbc351dc13 (image=quay.io/ceph/ceph:v19, name=intelligent_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:51:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:51:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:13 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 71dc87cb-17f1-449c-ba25-9b4258bb2897 (Updating node-exporter deployment (+3 -> 3))
Jan 23 09:51:13 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Jan 23 09:51:13 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Jan 23 09:51:13 compute-0 systemd[1]: Started libpod-conmon-aa21b7520b727582c212dc60e7d81b5b7cf35db68540c5b234d84dfbc351dc13.scope.
Jan 23 09:51:13 compute-0 podman[89004]: 2026-01-23 09:51:13.585174736 +0000 UTC m=+0.024500547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a94210a206c224b16b9fd6963f02135585c42efdd5eb975fd2a4e735d90a9266/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a94210a206c224b16b9fd6963f02135585c42efdd5eb975fd2a4e735d90a9266/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a94210a206c224b16b9fd6963f02135585c42efdd5eb975fd2a4e735d90a9266/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:13 compute-0 sudo[89019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:51:13 compute-0 podman[89004]: 2026-01-23 09:51:13.708170642 +0000 UTC m=+0.147496453 container init aa21b7520b727582c212dc60e7d81b5b7cf35db68540c5b234d84dfbc351dc13 (image=quay.io/ceph/ceph:v19, name=intelligent_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 09:51:13 compute-0 sudo[89019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:13 compute-0 sudo[89019]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:13 compute-0 podman[89004]: 2026-01-23 09:51:13.716807913 +0000 UTC m=+0.156133704 container start aa21b7520b727582c212dc60e7d81b5b7cf35db68540c5b234d84dfbc351dc13 (image=quay.io/ceph/ceph:v19, name=intelligent_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:51:13 compute-0 podman[89004]: 2026-01-23 09:51:13.724636253 +0000 UTC m=+0.163962074 container attach aa21b7520b727582c212dc60e7d81b5b7cf35db68540c5b234d84dfbc351dc13 (image=quay.io/ceph/ceph:v19, name=intelligent_ellis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Jan 23 09:51:13 compute-0 sudo[89048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:51:13 compute-0 sudo[89048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:13 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 23 09:51:13 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 23 09:51:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v7: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 28 KiB/s rd, 0 B/s wr, 11 op/s
Jan 23 09:51:14 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Jan 23 09:51:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:14 compute-0 intelligent_ellis[89023]: Option PROMETHEUS_API_HOST updated
Jan 23 09:51:14 compute-0 systemd[1]: Reloading.
Jan 23 09:51:14 compute-0 podman[89004]: 2026-01-23 09:51:14.141449602 +0000 UTC m=+0.580775393 container died aa21b7520b727582c212dc60e7d81b5b7cf35db68540c5b234d84dfbc351dc13 (image=quay.io/ceph/ceph:v19, name=intelligent_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 5.8 deep-scrub starts
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 5.8 deep-scrub ok
Jan 23 09:51:14 compute-0 ceph-mon[74335]: Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:14 compute-0 ceph-mon[74335]: Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:14 compute-0 ceph-mon[74335]: Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:14 compute-0 ceph-mon[74335]: from='client.14310 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:14 compute-0 ceph-mon[74335]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 6.b scrub starts
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 6.b scrub ok
Jan 23 09:51:14 compute-0 ceph-mon[74335]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:14 compute-0 ceph-mon[74335]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:14 compute-0 ceph-mon[74335]: Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 4.1 scrub starts
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 4.1 scrub ok
Jan 23 09:51:14 compute-0 ceph-mon[74335]: Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:14 compute-0 ceph-mon[74335]: pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 4.17 scrub starts
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 4.17 scrub ok
Jan 23 09:51:14 compute-0 ceph-mon[74335]: Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 6.3 scrub starts
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 6.3 scrub ok
Jan 23 09:51:14 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 4.16 scrub starts
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 4.16 scrub ok
Jan 23 09:51:14 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:14 compute-0 ceph-mon[74335]: mgrmap e16: compute-0.nbdygh(active, since 7s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:14 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-1.jmakme", "id": "compute-1.jmakme"}]: dispatch
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 6.2 scrub starts
Jan 23 09:51:14 compute-0 ceph-mon[74335]: 6.2 scrub ok
Jan 23 09:51:14 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:14 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:14 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:14 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:14 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:14 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:14 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:14 compute-0 systemd-rc-local-generator[89175]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:51:14 compute-0 systemd-sysv-generator[89178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:51:14 compute-0 systemd[1]: libpod-aa21b7520b727582c212dc60e7d81b5b7cf35db68540c5b234d84dfbc351dc13.scope: Deactivated successfully.
Jan 23 09:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a94210a206c224b16b9fd6963f02135585c42efdd5eb975fd2a4e735d90a9266-merged.mount: Deactivated successfully.
Jan 23 09:51:14 compute-0 podman[89004]: 2026-01-23 09:51:14.405979947 +0000 UTC m=+0.845305738 container remove aa21b7520b727582c212dc60e7d81b5b7cf35db68540c5b234d84dfbc351dc13 (image=quay.io/ceph/ceph:v19, name=intelligent_ellis, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 09:51:14 compute-0 systemd[1]: libpod-conmon-aa21b7520b727582c212dc60e7d81b5b7cf35db68540c5b234d84dfbc351dc13.scope: Deactivated successfully.
Jan 23 09:51:14 compute-0 sudo[89001]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:14 compute-0 systemd[1]: Reloading.
Jan 23 09:51:14 compute-0 systemd-sysv-generator[89216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:51:14 compute-0 systemd-rc-local-generator[89211]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:51:14 compute-0 sudo[89246]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jchxlcchqurklxmzktpugvfbsgheoliq ; /usr/bin/python3'
Jan 23 09:51:14 compute-0 sudo[89246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:14 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:51:14 compute-0 python3[89250]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:14 compute-0 bash[89298]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Jan 23 09:51:14 compute-0 podman[89297]: 2026-01-23 09:51:14.951338132 +0000 UTC m=+0.069667453 container create b3bb9c5ff1bc586ee0e3f07b25643a452891c1db810dd2715c6c3477a7529c6b (image=quay.io/ceph/ceph:v19, name=cool_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:14 compute-0 systemd[1]: Started libpod-conmon-b3bb9c5ff1bc586ee0e3f07b25643a452891c1db810dd2715c6c3477a7529c6b.scope.
Jan 23 09:51:14 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Jan 23 09:51:15 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Jan 23 09:51:15 compute-0 podman[89297]: 2026-01-23 09:51:14.916741457 +0000 UTC m=+0.035070808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c66b2da0c8aad39039f1654b8d3fc2327acfdb5cae40d9bcc3351315cfc4b47/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c66b2da0c8aad39039f1654b8d3fc2327acfdb5cae40d9bcc3351315cfc4b47/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c66b2da0c8aad39039f1654b8d3fc2327acfdb5cae40d9bcc3351315cfc4b47/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:15 compute-0 podman[89297]: 2026-01-23 09:51:15.045204252 +0000 UTC m=+0.163533593 container init b3bb9c5ff1bc586ee0e3f07b25643a452891c1db810dd2715c6c3477a7529c6b (image=quay.io/ceph/ceph:v19, name=cool_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 09:51:15 compute-0 podman[89297]: 2026-01-23 09:51:15.051917574 +0000 UTC m=+0.170246905 container start b3bb9c5ff1bc586ee0e3f07b25643a452891c1db810dd2715c6c3477a7529c6b (image=quay.io/ceph/ceph:v19, name=cool_varahamihira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:51:15 compute-0 podman[89297]: 2026-01-23 09:51:15.055535186 +0000 UTC m=+0.173864587 container attach b3bb9c5ff1bc586ee0e3f07b25643a452891c1db810dd2715c6c3477a7529c6b (image=quay.io/ceph/ceph:v19, name=cool_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 09:51:15 compute-0 ceph-mon[74335]: Deploying daemon node-exporter.compute-0 on compute-0
Jan 23 09:51:15 compute-0 ceph-mon[74335]: 5.17 scrub starts
Jan 23 09:51:15 compute-0 ceph-mon[74335]: 5.17 scrub ok
Jan 23 09:51:15 compute-0 ceph-mon[74335]: pgmap v7: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 28 KiB/s rd, 0 B/s wr, 11 op/s
Jan 23 09:51:15 compute-0 ceph-mon[74335]: from='client.14316 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:15 compute-0 ceph-mon[74335]: 6.5 deep-scrub starts
Jan 23 09:51:15 compute-0 ceph-mon[74335]: 6.5 deep-scrub ok
Jan 23 09:51:15 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14322 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Jan 23 09:51:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:15 compute-0 cool_varahamihira[89327]: Option GRAFANA_API_URL updated
Jan 23 09:51:15 compute-0 systemd[1]: libpod-b3bb9c5ff1bc586ee0e3f07b25643a452891c1db810dd2715c6c3477a7529c6b.scope: Deactivated successfully.
Jan 23 09:51:15 compute-0 bash[89298]: Getting image source signatures
Jan 23 09:51:15 compute-0 bash[89298]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Jan 23 09:51:15 compute-0 bash[89298]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Jan 23 09:51:15 compute-0 bash[89298]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Jan 23 09:51:15 compute-0 podman[89352]: 2026-01-23 09:51:15.891606737 +0000 UTC m=+0.396893701 container died b3bb9c5ff1bc586ee0e3f07b25643a452891c1db810dd2715c6c3477a7529c6b (image=quay.io/ceph/ceph:v19, name=cool_varahamihira, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 23 09:51:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c66b2da0c8aad39039f1654b8d3fc2327acfdb5cae40d9bcc3351315cfc4b47-merged.mount: Deactivated successfully.
Jan 23 09:51:15 compute-0 podman[89352]: 2026-01-23 09:51:15.934397691 +0000 UTC m=+0.439684645 container remove b3bb9c5ff1bc586ee0e3f07b25643a452891c1db810dd2715c6c3477a7529c6b (image=quay.io/ceph/ceph:v19, name=cool_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:15 compute-0 systemd[1]: libpod-conmon-b3bb9c5ff1bc586ee0e3f07b25643a452891c1db810dd2715c6c3477a7529c6b.scope: Deactivated successfully.
Jan 23 09:51:15 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 23 09:51:15 compute-0 sudo[89246]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:15 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 23 09:51:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v8: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 0 B/s wr, 8 op/s
Jan 23 09:51:16 compute-0 sudo[89435]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgxvbmzibsscaaaziddhtjfgprtibrwx ; /usr/bin/python3'
Jan 23 09:51:16 compute-0 sudo[89435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:16 compute-0 python3[89437]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:16 compute-0 bash[89298]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Jan 23 09:51:16 compute-0 bash[89298]: Writing manifest to image destination
Jan 23 09:51:16 compute-0 podman[89438]: 2026-01-23 09:51:16.457649672 +0000 UTC m=+0.172895732 container create c83777617c2f451787dfe320978b3fa8fe9fea4dcf7bed71e6796e75c3e61feb (image=quay.io/ceph/ceph:v19, name=gifted_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:51:16 compute-0 podman[89298]: 2026-01-23 09:51:16.493285173 +0000 UTC m=+1.615040121 container create 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:51:16 compute-0 ceph-mon[74335]: 6.14 scrub starts
Jan 23 09:51:16 compute-0 ceph-mon[74335]: 6.14 scrub ok
Jan 23 09:51:16 compute-0 ceph-mon[74335]: from='client.14322 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:16 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:16 compute-0 ceph-mon[74335]: 5.1 scrub starts
Jan 23 09:51:16 compute-0 ceph-mon[74335]: 5.1 scrub ok
Jan 23 09:51:16 compute-0 ceph-mon[74335]: pgmap v8: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 0 B/s wr, 8 op/s
Jan 23 09:51:16 compute-0 systemd[1]: Started libpod-conmon-c83777617c2f451787dfe320978b3fa8fe9fea4dcf7bed71e6796e75c3e61feb.scope.
Jan 23 09:51:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b91283fddcaae0b5ef0583c61cbf243449657dd4a58f8e7beb91c637cb4634/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b91283fddcaae0b5ef0583c61cbf243449657dd4a58f8e7beb91c637cb4634/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b91283fddcaae0b5ef0583c61cbf243449657dd4a58f8e7beb91c637cb4634/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:16 compute-0 podman[89438]: 2026-01-23 09:51:16.436165003 +0000 UTC m=+0.151411073 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e632e93d849c0bb7e7946a5321ac39dd650986526d3ee8477ae82295cdb153/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:16 compute-0 podman[89438]: 2026-01-23 09:51:16.542213445 +0000 UTC m=+0.257459525 container init c83777617c2f451787dfe320978b3fa8fe9fea4dcf7bed71e6796e75c3e61feb (image=quay.io/ceph/ceph:v19, name=gifted_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:51:16 compute-0 podman[89298]: 2026-01-23 09:51:16.476969596 +0000 UTC m=+1.598724574 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Jan 23 09:51:16 compute-0 podman[89298]: 2026-01-23 09:51:16.548747442 +0000 UTC m=+1.670502420 container init 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:51:16 compute-0 podman[89438]: 2026-01-23 09:51:16.550672111 +0000 UTC m=+0.265918161 container start c83777617c2f451787dfe320978b3fa8fe9fea4dcf7bed71e6796e75c3e61feb (image=quay.io/ceph/ceph:v19, name=gifted_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:51:16 compute-0 podman[89438]: 2026-01-23 09:51:16.554186281 +0000 UTC m=+0.269432361 container attach c83777617c2f451787dfe320978b3fa8fe9fea4dcf7bed71e6796e75c3e61feb (image=quay.io/ceph/ceph:v19, name=gifted_morse, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 09:51:16 compute-0 podman[89298]: 2026-01-23 09:51:16.55651124 +0000 UTC m=+1.678266188 container start 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:51:16 compute-0 bash[89298]: 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.565Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.566Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.567Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.567Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.567Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=arp
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=bcache
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=bonding
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=cpu
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=dmi
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=edac
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=entropy
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=filefd
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=hwmon
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=netclass
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=netdev
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=netstat
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=nfs
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=nvme
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=os
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=pressure
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=rapl
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=selinux
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=softnet
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=stat
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=textfile
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=thermal_zone
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=time
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=uname
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=xfs
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.568Z caller=node_exporter.go:117 level=info collector=zfs
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.569Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Jan 23 09:51:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0[89471]: ts=2026-01-23T09:51:16.569Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Jan 23 09:51:16 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:51:16 compute-0 sudo[89048]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:51:16 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:51:16 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 23 09:51:16 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:16 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Jan 23 09:51:16 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Jan 23 09:51:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Jan 23 09:51:16 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1110789864' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 23 09:51:16 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.16 deep-scrub starts
Jan 23 09:51:16 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.16 deep-scrub ok
Jan 23 09:51:17 compute-0 ceph-mon[74335]: 5.14 scrub starts
Jan 23 09:51:17 compute-0 ceph-mon[74335]: 5.14 scrub ok
Jan 23 09:51:17 compute-0 ceph-mon[74335]: 3.5 scrub starts
Jan 23 09:51:17 compute-0 ceph-mon[74335]: 3.5 scrub ok
Jan 23 09:51:17 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:17 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:17 compute-0 ceph-mon[74335]: from='mgr.14268 192.168.122.100:0/3010064577' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:17 compute-0 ceph-mon[74335]: Deploying daemon node-exporter.compute-1 on compute-1
Jan 23 09:51:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1110789864' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 23 09:51:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1110789864' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  1: '-n'
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  2: 'mgr.compute-0.nbdygh'
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  3: '-f'
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  4: '--setuser'
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  5: 'ceph'
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  6: '--setgroup'
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  7: 'ceph'
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  8: '--default-log-to-file=false'
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  9: '--default-log-to-journald=true'
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 23 09:51:17 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.nbdygh(active, since 12s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:17 compute-0 systemd[1]: libpod-c83777617c2f451787dfe320978b3fa8fe9fea4dcf7bed71e6796e75c3e61feb.scope: Deactivated successfully.
Jan 23 09:51:17 compute-0 podman[89438]: 2026-01-23 09:51:17.778975862 +0000 UTC m=+1.494221922 container died c83777617c2f451787dfe320978b3fa8fe9fea4dcf7bed71e6796e75c3e61feb (image=quay.io/ceph/ceph:v19, name=gifted_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-97b91283fddcaae0b5ef0583c61cbf243449657dd4a58f8e7beb91c637cb4634-merged.mount: Deactivated successfully.
Jan 23 09:51:17 compute-0 podman[89438]: 2026-01-23 09:51:17.827953724 +0000 UTC m=+1.543199774 container remove c83777617c2f451787dfe320978b3fa8fe9fea4dcf7bed71e6796e75c3e61feb (image=quay.io/ceph/ceph:v19, name=gifted_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 23 09:51:17 compute-0 systemd[1]: libpod-conmon-c83777617c2f451787dfe320978b3fa8fe9fea4dcf7bed71e6796e75c3e61feb.scope: Deactivated successfully.
Jan 23 09:51:17 compute-0 sshd-session[87534]: Connection closed by 192.168.122.100 port 35770
Jan 23 09:51:17 compute-0 sudo[89435]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:17 compute-0 sshd-session[87526]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:51:17 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 23 09:51:17 compute-0 systemd[1]: session-34.scope: Consumed 5.318s CPU time.
Jan 23 09:51:17 compute-0 systemd-logind[784]: Session 34 logged out. Waiting for processes to exit.
Jan 23 09:51:17 compute-0 systemd-logind[784]: Removed session 34.
Jan 23 09:51:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ignoring --setuser ceph since I am not root
Jan 23 09:51:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ignoring --setgroup ceph since I am not root
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: pidfile_write: ignore empty --pid-file
Jan 23 09:51:17 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'alerts'
Jan 23 09:51:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:17 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 23 09:51:17 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 23 09:51:18 compute-0 sudo[89558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkfrfwivfsdslsrmcfovwtiuafegptww ; /usr/bin/python3'
Jan 23 09:51:18 compute-0 sudo[89558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:18.030+0000 7f11aa40f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:51:18 compute-0 ceph-mgr[74633]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:51:18 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'balancer'
Jan 23 09:51:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:18.136+0000 7f11aa40f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:51:18 compute-0 ceph-mgr[74633]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:51:18 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'cephadm'
Jan 23 09:51:18 compute-0 python3[89560]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:18 compute-0 podman[89561]: 2026-01-23 09:51:18.213711598 +0000 UTC m=+0.040331882 container create 37a0717f85ef39ac76fb547bd96059f1cba6da4dc981df68d59520f75d8750af (image=quay.io/ceph/ceph:v19, name=intelligent_lichterman, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:51:18 compute-0 systemd[1]: Started libpod-conmon-37a0717f85ef39ac76fb547bd96059f1cba6da4dc981df68d59520f75d8750af.scope.
Jan 23 09:51:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59513c17d6c272699c2e8c5a9855a22de095dd1309070191634c10a535596182/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59513c17d6c272699c2e8c5a9855a22de095dd1309070191634c10a535596182/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59513c17d6c272699c2e8c5a9855a22de095dd1309070191634c10a535596182/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:18 compute-0 podman[89561]: 2026-01-23 09:51:18.198314544 +0000 UTC m=+0.024934858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:18 compute-0 podman[89561]: 2026-01-23 09:51:18.304372417 +0000 UTC m=+0.130992721 container init 37a0717f85ef39ac76fb547bd96059f1cba6da4dc981df68d59520f75d8750af (image=quay.io/ceph/ceph:v19, name=intelligent_lichterman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:18 compute-0 podman[89561]: 2026-01-23 09:51:18.309850797 +0000 UTC m=+0.136471091 container start 37a0717f85ef39ac76fb547bd96059f1cba6da4dc981df68d59520f75d8750af (image=quay.io/ceph/ceph:v19, name=intelligent_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:51:18 compute-0 podman[89561]: 2026-01-23 09:51:18.313229343 +0000 UTC m=+0.139849627 container attach 37a0717f85ef39ac76fb547bd96059f1cba6da4dc981df68d59520f75d8750af (image=quay.io/ceph/ceph:v19, name=intelligent_lichterman, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:51:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Jan 23 09:51:18 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/435334493' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 23 09:51:18 compute-0 ceph-mon[74335]: 6.16 deep-scrub starts
Jan 23 09:51:18 compute-0 ceph-mon[74335]: 6.16 deep-scrub ok
Jan 23 09:51:18 compute-0 ceph-mon[74335]: 5.2 scrub starts
Jan 23 09:51:18 compute-0 ceph-mon[74335]: 5.2 scrub ok
Jan 23 09:51:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1110789864' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 23 09:51:18 compute-0 ceph-mon[74335]: mgrmap e17: compute-0.nbdygh(active, since 12s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/435334493' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 23 09:51:18 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/435334493' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 23 09:51:18 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.nbdygh(active, since 13s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:18 compute-0 systemd[1]: libpod-37a0717f85ef39ac76fb547bd96059f1cba6da4dc981df68d59520f75d8750af.scope: Deactivated successfully.
Jan 23 09:51:18 compute-0 podman[89561]: 2026-01-23 09:51:18.819042078 +0000 UTC m=+0.645662362 container died 37a0717f85ef39ac76fb547bd96059f1cba6da4dc981df68d59520f75d8750af (image=quay.io/ceph/ceph:v19, name=intelligent_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-59513c17d6c272699c2e8c5a9855a22de095dd1309070191634c10a535596182-merged.mount: Deactivated successfully.
Jan 23 09:51:18 compute-0 podman[89561]: 2026-01-23 09:51:18.864627414 +0000 UTC m=+0.691247698 container remove 37a0717f85ef39ac76fb547bd96059f1cba6da4dc981df68d59520f75d8750af (image=quay.io/ceph/ceph:v19, name=intelligent_lichterman, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 09:51:18 compute-0 systemd[1]: libpod-conmon-37a0717f85ef39ac76fb547bd96059f1cba6da4dc981df68d59520f75d8750af.scope: Deactivated successfully.
Jan 23 09:51:18 compute-0 sudo[89558]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:18 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.11 deep-scrub starts
Jan 23 09:51:18 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.11 deep-scrub ok
Jan 23 09:51:18 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'crash'
Jan 23 09:51:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:19.082+0000 7f11aa40f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:51:19 compute-0 ceph-mgr[74633]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:51:19 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'dashboard'
Jan 23 09:51:19 compute-0 python3[89699]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:51:19 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'devicehealth'
Jan 23 09:51:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:19.850+0000 7f11aa40f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:51:19 compute-0 ceph-mgr[74633]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:51:19 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'diskprediction_local'
Jan 23 09:51:19 compute-0 ceph-mon[74335]: 5.12 scrub starts
Jan 23 09:51:19 compute-0 ceph-mon[74335]: 5.12 scrub ok
Jan 23 09:51:19 compute-0 ceph-mon[74335]: 3.3 scrub starts
Jan 23 09:51:19 compute-0 ceph-mon[74335]: 3.3 scrub ok
Jan 23 09:51:19 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/435334493' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 23 09:51:19 compute-0 ceph-mon[74335]: mgrmap e18: compute-0.nbdygh(active, since 13s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:19 compute-0 python3[89770]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769161879.3922276-37559-157204839420791/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:51:20 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 23 09:51:20 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 23 09:51:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 23 09:51:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 23 09:51:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   from numpy import show_config as show_numpy_config
Jan 23 09:51:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:20.061+0000 7f11aa40f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:51:20 compute-0 ceph-mgr[74633]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:51:20 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'influx'
Jan 23 09:51:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:20.134+0000 7f11aa40f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:51:20 compute-0 ceph-mgr[74633]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:51:20 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'insights'
Jan 23 09:51:20 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'iostat'
Jan 23 09:51:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:20.295+0000 7f11aa40f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:51:20 compute-0 ceph-mgr[74633]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:51:20 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'k8sevents'
Jan 23 09:51:20 compute-0 sudo[89818]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfsogjovahodagaveigaqlqvfvzlgztr ; /usr/bin/python3'
Jan 23 09:51:20 compute-0 sudo[89818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:20 compute-0 python3[89820]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:20 compute-0 podman[89821]: 2026-01-23 09:51:20.606931249 +0000 UTC m=+0.088071923 container create 7c2dd66bf8714f691a54cf3d644f880211d55f0ffbf5c432545af8a405d635b8 (image=quay.io/ceph/ceph:v19, name=happy_ride, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 09:51:20 compute-0 systemd[1]: Started libpod-conmon-7c2dd66bf8714f691a54cf3d644f880211d55f0ffbf5c432545af8a405d635b8.scope.
Jan 23 09:51:20 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63f57f31ef38c4e9073ce2a5a77beb9709f9bbc0999951dbe25d3e2e1394997/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63f57f31ef38c4e9073ce2a5a77beb9709f9bbc0999951dbe25d3e2e1394997/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63f57f31ef38c4e9073ce2a5a77beb9709f9bbc0999951dbe25d3e2e1394997/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:20 compute-0 podman[89821]: 2026-01-23 09:51:20.588760634 +0000 UTC m=+0.069901328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:20 compute-0 podman[89821]: 2026-01-23 09:51:20.686073913 +0000 UTC m=+0.167214607 container init 7c2dd66bf8714f691a54cf3d644f880211d55f0ffbf5c432545af8a405d635b8 (image=quay.io/ceph/ceph:v19, name=happy_ride, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:20 compute-0 podman[89821]: 2026-01-23 09:51:20.69262243 +0000 UTC m=+0.173763094 container start 7c2dd66bf8714f691a54cf3d644f880211d55f0ffbf5c432545af8a405d635b8 (image=quay.io/ceph/ceph:v19, name=happy_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:51:20 compute-0 podman[89821]: 2026-01-23 09:51:20.695665938 +0000 UTC m=+0.176806632 container attach 7c2dd66bf8714f691a54cf3d644f880211d55f0ffbf5c432545af8a405d635b8 (image=quay.io/ceph/ceph:v19, name=happy_ride, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:51:20 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'localpool'
Jan 23 09:51:20 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mds_autoscaler'
Jan 23 09:51:20 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Jan 23 09:51:20 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Jan 23 09:51:21 compute-0 ceph-mon[74335]: 6.11 deep-scrub starts
Jan 23 09:51:21 compute-0 ceph-mon[74335]: 6.11 deep-scrub ok
Jan 23 09:51:21 compute-0 ceph-mon[74335]: 4.5 scrub starts
Jan 23 09:51:21 compute-0 ceph-mon[74335]: 4.5 scrub ok
Jan 23 09:51:21 compute-0 ceph-mon[74335]: 5.4 deep-scrub starts
Jan 23 09:51:21 compute-0 ceph-mon[74335]: 5.4 deep-scrub ok
Jan 23 09:51:21 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mirroring'
Jan 23 09:51:21 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'nfs'
Jan 23 09:51:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:21.525+0000 7f11aa40f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:51:21 compute-0 ceph-mgr[74633]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:51:21 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'orchestrator'
Jan 23 09:51:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:21.762+0000 7f11aa40f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:21 compute-0 ceph-mgr[74633]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:21 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_perf_query'
Jan 23 09:51:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:21.851+0000 7f11aa40f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:51:21 compute-0 ceph-mgr[74633]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:51:21 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_support'
Jan 23 09:51:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:21.929+0000 7f11aa40f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:51:21 compute-0 ceph-mgr[74633]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:51:21 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'pg_autoscaler'
Jan 23 09:51:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:22.019+0000 7f11aa40f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:51:22 compute-0 ceph-mgr[74633]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:51:22 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'progress'
Jan 23 09:51:22 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Jan 23 09:51:22 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Jan 23 09:51:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:22.096+0000 7f11aa40f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:51:22 compute-0 ceph-mgr[74633]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:51:22 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'prometheus'
Jan 23 09:51:22 compute-0 ceph-mon[74335]: 5.13 scrub starts
Jan 23 09:51:22 compute-0 ceph-mon[74335]: 5.13 scrub ok
Jan 23 09:51:22 compute-0 ceph-mon[74335]: 6.10 scrub starts
Jan 23 09:51:22 compute-0 ceph-mon[74335]: 6.10 scrub ok
Jan 23 09:51:22 compute-0 ceph-mon[74335]: 3.9 scrub starts
Jan 23 09:51:22 compute-0 ceph-mon[74335]: 3.9 scrub ok
Jan 23 09:51:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:22.492+0000 7f11aa40f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:51:22 compute-0 ceph-mgr[74633]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:51:22 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rbd_support'
Jan 23 09:51:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:22.611+0000 7f11aa40f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:51:22 compute-0 ceph-mgr[74633]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:51:22 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'restful'
Jan 23 09:51:22 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rgw'
Jan 23 09:51:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:23 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 23 09:51:23 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 23 09:51:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:23.144+0000 7f11aa40f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:51:23 compute-0 ceph-mgr[74633]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:51:23 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rook'
Jan 23 09:51:23 compute-0 ceph-mon[74335]: 6.13 scrub starts
Jan 23 09:51:23 compute-0 ceph-mon[74335]: 6.13 scrub ok
Jan 23 09:51:23 compute-0 ceph-mon[74335]: 5.7 scrub starts
Jan 23 09:51:23 compute-0 ceph-mon[74335]: 5.7 scrub ok
Jan 23 09:51:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:23.920+0000 7f11aa40f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:51:23 compute-0 ceph-mgr[74633]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:51:23 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'selftest'
Jan 23 09:51:23 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Jan 23 09:51:24 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Jan 23 09:51:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:24.007+0000 7f11aa40f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'snap_schedule'
Jan 23 09:51:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:24.105+0000 7f11aa40f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'stats'
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'status'
Jan 23 09:51:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:24.282+0000 7f11aa40f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telegraf'
Jan 23 09:51:24 compute-0 ceph-mon[74335]: 5.1e scrub starts
Jan 23 09:51:24 compute-0 ceph-mon[74335]: 5.1e scrub ok
Jan 23 09:51:24 compute-0 ceph-mon[74335]: 6.8 scrub starts
Jan 23 09:51:24 compute-0 ceph-mon[74335]: 6.8 scrub ok
Jan 23 09:51:24 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.jmakme restarted
Jan 23 09:51:24 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.jmakme started
Jan 23 09:51:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:24.361+0000 7f11aa40f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telemetry'
Jan 23 09:51:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:24.535+0000 7f11aa40f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'test_orchestrator'
Jan 23 09:51:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:24.784+0000 7f11aa40f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:24 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'volumes'
Jan 23 09:51:24 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uczrot restarted
Jan 23 09:51:24 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uczrot started
Jan 23 09:51:25 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 23 09:51:25 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 23 09:51:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:25.085+0000 7f11aa40f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'zabbix'
Jan 23 09:51:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:25.170+0000 7f11aa40f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:51:25 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Active manager daemon compute-0.nbdygh restarted
Jan 23 09:51:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 23 09:51:25 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.nbdygh
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: ms_deliver_dispatch: unhandled message 0x557bfc22b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  1: '-n'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  2: 'mgr.compute-0.nbdygh'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  3: '-f'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  4: '--setuser'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  5: 'ceph'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  6: '--setgroup'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  7: 'ceph'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  8: '--default-log-to-file=false'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  9: '--default-log-to-journald=true'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr respawn  exe_path /proc/self/exe
Jan 23 09:51:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Jan 23 09:51:25 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Jan 23 09:51:25 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.nbdygh(active, starting, since 0.060897s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ignoring --setuser ceph since I am not root
Jan 23 09:51:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ignoring --setgroup ceph since I am not root
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: pidfile_write: ignore empty --pid-file
Jan 23 09:51:25 compute-0 ceph-mon[74335]: 6.1d scrub starts
Jan 23 09:51:25 compute-0 ceph-mon[74335]: 6.1d scrub ok
Jan 23 09:51:25 compute-0 ceph-mon[74335]: Standby manager daemon compute-1.jmakme restarted
Jan 23 09:51:25 compute-0 ceph-mon[74335]: Standby manager daemon compute-1.jmakme started
Jan 23 09:51:25 compute-0 ceph-mon[74335]: 4.a scrub starts
Jan 23 09:51:25 compute-0 ceph-mon[74335]: 4.a scrub ok
Jan 23 09:51:25 compute-0 ceph-mon[74335]: Standby manager daemon compute-2.uczrot restarted
Jan 23 09:51:25 compute-0 ceph-mon[74335]: Standby manager daemon compute-2.uczrot started
Jan 23 09:51:25 compute-0 ceph-mon[74335]: Active manager daemon compute-0.nbdygh restarted
Jan 23 09:51:25 compute-0 ceph-mon[74335]: Activating manager daemon compute-0.nbdygh
Jan 23 09:51:25 compute-0 ceph-mon[74335]: osdmap e33: 2 total, 2 up, 2 in
Jan 23 09:51:25 compute-0 ceph-mon[74335]: mgrmap e19: compute-0.nbdygh(active, starting, since 0.060897s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'alerts'
Jan 23 09:51:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:25.496+0000 7f75ec31b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'balancer'
Jan 23 09:51:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:25.585+0000 7f75ec31b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:51:25 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'cephadm'
Jan 23 09:51:26 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 23 09:51:26 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 23 09:51:26 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'crash'
Jan 23 09:51:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:26.611+0000 7f75ec31b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:51:26 compute-0 ceph-mgr[74633]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:51:26 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'dashboard'
Jan 23 09:51:26 compute-0 ceph-mon[74335]: 7.1b scrub starts
Jan 23 09:51:26 compute-0 ceph-mon[74335]: 7.1b scrub ok
Jan 23 09:51:26 compute-0 ceph-mon[74335]: 6.7 scrub starts
Jan 23 09:51:26 compute-0 ceph-mon[74335]: 6.7 scrub ok
Jan 23 09:51:27 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 23 09:51:27 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 23 09:51:27 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'devicehealth'
Jan 23 09:51:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:27.408+0000 7f75ec31b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:51:27 compute-0 ceph-mgr[74633]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:51:27 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'diskprediction_local'
Jan 23 09:51:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 23 09:51:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 23 09:51:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   from numpy import show_config as show_numpy_config
Jan 23 09:51:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:27.636+0000 7f75ec31b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:51:27 compute-0 ceph-mgr[74633]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:51:27 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'influx'
Jan 23 09:51:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:27.718+0000 7f75ec31b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:51:27 compute-0 ceph-mgr[74633]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:51:27 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'insights'
Jan 23 09:51:27 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'iostat'
Jan 23 09:51:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:27.880+0000 7f75ec31b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:51:27 compute-0 ceph-mgr[74633]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:51:27 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'k8sevents'
Jan 23 09:51:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:27 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 23 09:51:27 compute-0 systemd[75668]: Activating special unit Exit the Session...
Jan 23 09:51:27 compute-0 systemd[75668]: Stopped target Main User Target.
Jan 23 09:51:27 compute-0 systemd[75668]: Stopped target Basic System.
Jan 23 09:51:27 compute-0 systemd[75668]: Stopped target Paths.
Jan 23 09:51:27 compute-0 systemd[75668]: Stopped target Sockets.
Jan 23 09:51:27 compute-0 systemd[75668]: Stopped target Timers.
Jan 23 09:51:27 compute-0 systemd[75668]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 23 09:51:27 compute-0 systemd[75668]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 09:51:27 compute-0 systemd[75668]: Closed D-Bus User Message Bus Socket.
Jan 23 09:51:27 compute-0 systemd[75668]: Stopped Create User's Volatile Files and Directories.
Jan 23 09:51:27 compute-0 systemd[75668]: Removed slice User Application Slice.
Jan 23 09:51:27 compute-0 systemd[75668]: Reached target Shutdown.
Jan 23 09:51:27 compute-0 systemd[75668]: Finished Exit the Session.
Jan 23 09:51:27 compute-0 systemd[75668]: Reached target Exit the Session.
Jan 23 09:51:27 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 23 09:51:27 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 23 09:51:27 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 23 09:51:27 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 23 09:51:27 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 23 09:51:27 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 23 09:51:27 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 23 09:51:27 compute-0 systemd[1]: user-42477.slice: Consumed 26.103s CPU time.
Jan 23 09:51:27 compute-0 ceph-mon[74335]: 7.18 scrub starts
Jan 23 09:51:27 compute-0 ceph-mon[74335]: 7.18 scrub ok
Jan 23 09:51:27 compute-0 ceph-mon[74335]: 4.d scrub starts
Jan 23 09:51:27 compute-0 ceph-mon[74335]: 4.d scrub ok
Jan 23 09:51:27 compute-0 ceph-mon[74335]: 2.1b scrub starts
Jan 23 09:51:27 compute-0 ceph-mon[74335]: 2.1b scrub ok
Jan 23 09:51:27 compute-0 ceph-mon[74335]: 3.c scrub starts
Jan 23 09:51:27 compute-0 ceph-mon[74335]: 3.c scrub ok
Jan 23 09:51:28 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 23 09:51:28 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 23 09:51:28 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'localpool'
Jan 23 09:51:28 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mds_autoscaler'
Jan 23 09:51:28 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mirroring'
Jan 23 09:51:28 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'nfs'
Jan 23 09:51:29 compute-0 ceph-mon[74335]: 7.6 scrub starts
Jan 23 09:51:29 compute-0 ceph-mon[74335]: 7.6 scrub ok
Jan 23 09:51:29 compute-0 ceph-mon[74335]: 3.d deep-scrub starts
Jan 23 09:51:29 compute-0 ceph-mon[74335]: 3.d deep-scrub ok
Jan 23 09:51:29 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.1e deep-scrub starts
Jan 23 09:51:29 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.1e deep-scrub ok
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'orchestrator'
Jan 23 09:51:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:29.217+0000 7f75ec31b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:29.485+0000 7f75ec31b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_perf_query'
Jan 23 09:51:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:29.570+0000 7f75ec31b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_support'
Jan 23 09:51:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:29.648+0000 7f75ec31b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'pg_autoscaler'
Jan 23 09:51:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:29.751+0000 7f75ec31b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'progress'
Jan 23 09:51:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:29.829+0000 7f75ec31b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:51:29 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'prometheus'
Jan 23 09:51:30 compute-0 ceph-mon[74335]: 7.1e deep-scrub starts
Jan 23 09:51:30 compute-0 ceph-mon[74335]: 7.1e deep-scrub ok
Jan 23 09:51:30 compute-0 ceph-mon[74335]: 5.9 scrub starts
Jan 23 09:51:30 compute-0 ceph-mon[74335]: 5.9 scrub ok
Jan 23 09:51:30 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 23 09:51:30 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 23 09:51:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:30.227+0000 7f75ec31b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:51:30 compute-0 ceph-mgr[74633]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:51:30 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rbd_support'
Jan 23 09:51:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:30.335+0000 7f75ec31b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:51:30 compute-0 ceph-mgr[74633]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:51:30 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'restful'
Jan 23 09:51:30 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.jmakme restarted
Jan 23 09:51:30 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.jmakme started
Jan 23 09:51:30 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rgw'
Jan 23 09:51:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:30.841+0000 7f75ec31b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:51:30 compute-0 ceph-mgr[74633]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:51:30 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rook'
Jan 23 09:51:31 compute-0 ceph-mon[74335]: 7.2 scrub starts
Jan 23 09:51:31 compute-0 ceph-mon[74335]: 7.2 scrub ok
Jan 23 09:51:31 compute-0 ceph-mon[74335]: Standby manager daemon compute-1.jmakme restarted
Jan 23 09:51:31 compute-0 ceph-mon[74335]: Standby manager daemon compute-1.jmakme started
Jan 23 09:51:31 compute-0 ceph-mon[74335]: 4.8 scrub starts
Jan 23 09:51:31 compute-0 ceph-mon[74335]: 4.8 scrub ok
Jan 23 09:51:31 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 23 09:51:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.nbdygh(active, starting, since 6s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:31 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 23 09:51:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:31.534+0000 7f75ec31b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:51:31 compute-0 ceph-mgr[74633]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:51:31 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'selftest'
Jan 23 09:51:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:31.621+0000 7f75ec31b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:51:31 compute-0 ceph-mgr[74633]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:51:31 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'snap_schedule'
Jan 23 09:51:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:31.713+0000 7f75ec31b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:51:31 compute-0 ceph-mgr[74633]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:51:31 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'stats'
Jan 23 09:51:31 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'status'
Jan 23 09:51:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:31.896+0000 7f75ec31b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:51:31 compute-0 ceph-mgr[74633]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:51:31 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telegraf'
Jan 23 09:51:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:31.983+0000 7f75ec31b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:51:31 compute-0 ceph-mgr[74633]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:51:31 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telemetry'
Jan 23 09:51:32 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 23 09:51:32 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 23 09:51:32 compute-0 ceph-mon[74335]: 7.3 scrub starts
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mgrmap e20: compute-0.nbdygh(active, starting, since 6s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:32 compute-0 ceph-mon[74335]: 7.3 scrub ok
Jan 23 09:51:32 compute-0 ceph-mon[74335]: 6.a deep-scrub starts
Jan 23 09:51:32 compute-0 ceph-mon[74335]: 6.a deep-scrub ok
Jan 23 09:51:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:32.175+0000 7f75ec31b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'test_orchestrator'
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uczrot restarted
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uczrot started
Jan 23 09:51:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:32.466+0000 7f75ec31b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'volumes'
Jan 23 09:51:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:32.789+0000 7f75ec31b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'zabbix'
Jan 23 09:51:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:51:32.883+0000 7f75ec31b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Active manager daemon compute-0.nbdygh restarted
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.nbdygh
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: ms_deliver_dispatch: unhandled message 0x563a086ab860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e34 e34: 2 total, 2 up, 2 in
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e34: 2 total, 2 up, 2 in
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.nbdygh(active, starting, since 0.0330109s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr handle_mgr_map Activating!
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr handle_mgr_map I am now activating
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"} v 0)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"}]: dispatch
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.uczrot", "id": "compute-2.uczrot"} v 0)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uczrot", "id": "compute-2.uczrot"}]: dispatch
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.jmakme", "id": "compute-1.jmakme"} v 0)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-1.jmakme", "id": "compute-1.jmakme"}]: dispatch
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e1 all = 1
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 09:51:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: balancer
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:32 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Manager daemon compute-0.nbdygh is now available
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [balancer INFO root] Starting
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:51:32
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: cephadm
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: crash
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: dashboard
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [dashboard INFO sso] Loading SSO DB version=1
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: devicehealth
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Starting
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: iostat
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: nfs
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: orchestrator
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: pg_autoscaler
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: progress
Jan 23 09:51:32 compute-0 ceph-mgr[74633]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [progress INFO root] Loading...
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f756a65b8e0>, <progress.module.GhostEvent object at 0x7f756a65bb50>, <progress.module.GhostEvent object at 0x7f756a65bb80>, <progress.module.GhostEvent object at 0x7f756a65bbb0>, <progress.module.GhostEvent object at 0x7f756a65bbe0>, <progress.module.GhostEvent object at 0x7f756a65bc10>, <progress.module.GhostEvent object at 0x7f756a65bc40>, <progress.module.GhostEvent object at 0x7f756a65bc70>, <progress.module.GhostEvent object at 0x7f756a65bca0>, <progress.module.GhostEvent object at 0x7f756a65bcd0>] historic events
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [progress INFO root] Loaded OSDMap, ready.
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] recovery thread starting
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] starting setup
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: rbd_support
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: restful
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [restful INFO root] server_addr: :: server_port: 8003
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [restful WARNING root] server not running: no certificate configured
Jan 23 09:51:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"} v 0)
Jan 23 09:51:33 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: status
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: telemetry
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] PerfHandler: starting
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TaskHandler: starting
Jan 23 09:51:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"} v 0)
Jan 23 09:51:33 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: volumes
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] setup complete
Jan 23 09:51:33 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.f deep-scrub starts
Jan 23 09:51:33 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.f deep-scrub ok
Jan 23 09:51:33 compute-0 ceph-mon[74335]: 7.e scrub starts
Jan 23 09:51:33 compute-0 ceph-mon[74335]: 7.e scrub ok
Jan 23 09:51:33 compute-0 ceph-mon[74335]: Standby manager daemon compute-2.uczrot restarted
Jan 23 09:51:33 compute-0 ceph-mon[74335]: Standby manager daemon compute-2.uczrot started
Jan 23 09:51:33 compute-0 ceph-mon[74335]: 3.a scrub starts
Jan 23 09:51:33 compute-0 ceph-mon[74335]: 3.a scrub ok
Jan 23 09:51:33 compute-0 ceph-mon[74335]: Active manager daemon compute-0.nbdygh restarted
Jan 23 09:51:33 compute-0 ceph-mon[74335]: Activating manager daemon compute-0.nbdygh
Jan 23 09:51:33 compute-0 ceph-mon[74335]: osdmap e34: 2 total, 2 up, 2 in
Jan 23 09:51:33 compute-0 ceph-mon[74335]: mgrmap e21: compute-0.nbdygh(active, starting, since 0.0330109s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uczrot", "id": "compute-2.uczrot"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-1.jmakme", "id": "compute-1.jmakme"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: Manager daemon compute-0.nbdygh is now available
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 23 09:51:33 compute-0 sshd-session[90009]: Accepted publickey for ceph-admin from 192.168.122.100 port 57554 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:51:33 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 23 09:51:33 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 23 09:51:33 compute-0 systemd-logind[784]: New session 35 of user ceph-admin.
Jan 23 09:51:33 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 23 09:51:33 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 23 09:51:33 compute-0 systemd[90024]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by (uid=0)
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.module] Engine started.
Jan 23 09:51:33 compute-0 systemd[90024]: Queued start job for default target Main User Target.
Jan 23 09:51:33 compute-0 systemd[90024]: Created slice User Application Slice.
Jan 23 09:51:33 compute-0 systemd[90024]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 09:51:33 compute-0 systemd[90024]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 09:51:33 compute-0 systemd[90024]: Reached target Paths.
Jan 23 09:51:33 compute-0 systemd[90024]: Reached target Timers.
Jan 23 09:51:33 compute-0 systemd[90024]: Starting D-Bus User Message Bus Socket...
Jan 23 09:51:33 compute-0 systemd[90024]: Starting Create User's Volatile Files and Directories...
Jan 23 09:51:33 compute-0 systemd[90024]: Listening on D-Bus User Message Bus Socket.
Jan 23 09:51:33 compute-0 systemd[90024]: Finished Create User's Volatile Files and Directories.
Jan 23 09:51:33 compute-0 systemd[90024]: Reached target Sockets.
Jan 23 09:51:33 compute-0 systemd[90024]: Reached target Basic System.
Jan 23 09:51:33 compute-0 systemd[90024]: Reached target Main User Target.
Jan 23 09:51:33 compute-0 systemd[90024]: Startup finished in 120ms.
Jan 23 09:51:33 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 23 09:51:33 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Jan 23 09:51:33 compute-0 sshd-session[90009]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by (uid=0)
Jan 23 09:51:33 compute-0 sudo[90041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:51:33 compute-0 sudo[90041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:33 compute-0 sudo[90041]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:33 compute-0 sudo[90066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 09:51:33 compute-0 sudo[90066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:33 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.nbdygh(active, since 1.09481s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14346 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 23 09:51:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 23 09:51:33 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 23 09:51:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v3: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 23 09:51:33 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 23 09:51:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 23 09:51:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 23 09:51:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 23 09:51:34 compute-0 ceph-mon[74335]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 23 09:51:34 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 23 09:51:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0[74331]: 2026-01-23T09:51:33.999+0000 7f9ad8464640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 23 09:51:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 23 09:51:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e2 new map
Jan 23 09:51:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-01-23T09:51:34.000852+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T09:51:34.000760+0000
                                           modified        2026-01-23T09:51:34.000760+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Jan 23 09:51:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e35 e35: 2 total, 2 up, 2 in
Jan 23 09:51:34 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e35: 2 total, 2 up, 2 in
Jan 23 09:51:34 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 23 09:51:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 23 09:51:34 compute-0 systemd[1]: libpod-7c2dd66bf8714f691a54cf3d644f880211d55f0ffbf5c432545af8a405d635b8.scope: Deactivated successfully.
Jan 23 09:51:34 compute-0 podman[89821]: 2026-01-23 09:51:34.085711693 +0000 UTC m=+13.566852377 container died 7c2dd66bf8714f691a54cf3d644f880211d55f0ffbf5c432545af8a405d635b8 (image=quay.io/ceph/ceph:v19, name=happy_ride, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 09:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b63f57f31ef38c4e9073ce2a5a77beb9709f9bbc0999951dbe25d3e2e1394997-merged.mount: Deactivated successfully.
Jan 23 09:51:34 compute-0 podman[89821]: 2026-01-23 09:51:34.150597192 +0000 UTC m=+13.631737866 container remove 7c2dd66bf8714f691a54cf3d644f880211d55f0ffbf5c432545af8a405d635b8 (image=quay.io/ceph/ceph:v19, name=happy_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:34 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 23 09:51:34 compute-0 systemd[1]: libpod-conmon-7c2dd66bf8714f691a54cf3d644f880211d55f0ffbf5c432545af8a405d635b8.scope: Deactivated successfully.
Jan 23 09:51:34 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:51:34] ENGINE Bus STARTING
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:51:34] ENGINE Bus STARTING
Jan 23 09:51:34 compute-0 sudo[89818]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:51:34] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:51:34] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:51:34 compute-0 sudo[90200]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhwowjzqadodisadtextqrcdmkvafgaq ; /usr/bin/python3'
Jan 23 09:51:34 compute-0 sudo[90200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:51:34] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:51:34] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:51:34] ENGINE Bus STARTED
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:51:34] ENGINE Bus STARTED
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:51:34] ENGINE Client ('192.168.122.100', 48072) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:51:34] ENGINE Client ('192.168.122.100', 48072) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:51:34 compute-0 podman[90215]: 2026-01-23 09:51:34.461680497 +0000 UTC m=+0.070108443 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:34 compute-0 python3[90210]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:34 compute-0 podman[90235]: 2026-01-23 09:51:34.563915772 +0000 UTC m=+0.041429831 container create d5e59d88d59c4aca5b15655fd5e56be908b4cb3c56160c87d0b3eba0b948bba1 (image=quay.io/ceph/ceph:v19, name=modest_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 09:51:34 compute-0 ceph-mon[74335]: 7.f deep-scrub starts
Jan 23 09:51:34 compute-0 ceph-mon[74335]: 7.f deep-scrub ok
Jan 23 09:51:34 compute-0 ceph-mon[74335]: 3.e deep-scrub starts
Jan 23 09:51:34 compute-0 ceph-mon[74335]: 3.e deep-scrub ok
Jan 23 09:51:34 compute-0 ceph-mon[74335]: mgrmap e22: compute-0.nbdygh(active, since 1.09481s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:34 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 23 09:51:34 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 23 09:51:34 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 23 09:51:34 compute-0 ceph-mon[74335]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 23 09:51:34 compute-0 ceph-mon[74335]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 23 09:51:34 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 23 09:51:34 compute-0 ceph-mon[74335]: osdmap e35: 2 total, 2 up, 2 in
Jan 23 09:51:34 compute-0 ceph-mon[74335]: fsmap cephfs:0
Jan 23 09:51:34 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:34 compute-0 podman[90215]: 2026-01-23 09:51:34.577679384 +0000 UTC m=+0.186107300 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:34 compute-0 systemd[1]: Started libpod-conmon-d5e59d88d59c4aca5b15655fd5e56be908b4cb3c56160c87d0b3eba0b948bba1.scope.
Jan 23 09:51:34 compute-0 podman[90235]: 2026-01-23 09:51:34.546574828 +0000 UTC m=+0.024088857 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95bf14212c42fb69ce8305706f98b5f8e31e214890e1bb062f60a4ecdb5f104/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95bf14212c42fb69ce8305706f98b5f8e31e214890e1bb062f60a4ecdb5f104/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95bf14212c42fb69ce8305706f98b5f8e31e214890e1bb062f60a4ecdb5f104/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:34 compute-0 podman[90235]: 2026-01-23 09:51:34.762834709 +0000 UTC m=+0.240348748 container init d5e59d88d59c4aca5b15655fd5e56be908b4cb3c56160c87d0b3eba0b948bba1 (image=quay.io/ceph/ceph:v19, name=modest_williams, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:34 compute-0 podman[90235]: 2026-01-23 09:51:34.772448045 +0000 UTC m=+0.249962064 container start d5e59d88d59c4aca5b15655fd5e56be908b4cb3c56160c87d0b3eba0b948bba1 (image=quay.io/ceph/ceph:v19, name=modest_williams, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 23 09:51:34 compute-0 podman[90235]: 2026-01-23 09:51:34.780746827 +0000 UTC m=+0.258260846 container attach d5e59d88d59c4aca5b15655fd5e56be908b4cb3c56160c87d0b3eba0b948bba1 (image=quay.io/ceph/ceph:v19, name=modest_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 23 09:51:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:51:35 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Check health
Jan 23 09:51:35 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 23 09:51:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 23 09:51:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:51:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14376 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:35 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:35 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:35 compute-0 podman[90380]: 2026-01-23 09:51:35.168096813 +0000 UTC m=+0.055789798 container exec 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:51:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 23 09:51:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 modest_williams[90262]: Scheduled mds.cephfs update...
Jan 23 09:51:35 compute-0 systemd[1]: libpod-d5e59d88d59c4aca5b15655fd5e56be908b4cb3c56160c87d0b3eba0b948bba1.scope: Deactivated successfully.
Jan 23 09:51:35 compute-0 podman[90235]: 2026-01-23 09:51:35.233029283 +0000 UTC m=+0.710543312 container died d5e59d88d59c4aca5b15655fd5e56be908b4cb3c56160c87d0b3eba0b948bba1 (image=quay.io/ceph/ceph:v19, name=modest_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 09:51:35 compute-0 podman[90405]: 2026-01-23 09:51:35.24775358 +0000 UTC m=+0.059627536 container exec_died 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:51:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:51:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:51:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b95bf14212c42fb69ce8305706f98b5f8e31e214890e1bb062f60a4ecdb5f104-merged.mount: Deactivated successfully.
Jan 23 09:51:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 podman[90235]: 2026-01-23 09:51:35.371442073 +0000 UTC m=+0.848956092 container remove d5e59d88d59c4aca5b15655fd5e56be908b4cb3c56160c87d0b3eba0b948bba1 (image=quay.io/ceph/ceph:v19, name=modest_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:35 compute-0 podman[90380]: 2026-01-23 09:51:35.383708086 +0000 UTC m=+0.271401051 container exec_died 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:51:35 compute-0 systemd[1]: libpod-conmon-d5e59d88d59c4aca5b15655fd5e56be908b4cb3c56160c87d0b3eba0b948bba1.scope: Deactivated successfully.
Jan 23 09:51:35 compute-0 sudo[90200]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:35 compute-0 sudo[90066]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:51:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:51:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 sudo[90452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aedvvfmunnjaxtsktnipsdiacqcihnxu ; /usr/bin/python3'
Jan 23 09:51:35 compute-0 sudo[90452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:35 compute-0 sudo[90453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:51:35 compute-0 sudo[90453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:35 compute-0 sudo[90453]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:35 compute-0 ceph-mon[74335]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:35 compute-0 ceph-mon[74335]: 7.9 scrub starts
Jan 23 09:51:35 compute-0 ceph-mon[74335]: 7.9 scrub ok
Jan 23 09:51:35 compute-0 ceph-mon[74335]: [23/Jan/2026:09:51:34] ENGINE Bus STARTING
Jan 23 09:51:35 compute-0 ceph-mon[74335]: [23/Jan/2026:09:51:34] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:51:35 compute-0 ceph-mon[74335]: [23/Jan/2026:09:51:34] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:51:35 compute-0 ceph-mon[74335]: [23/Jan/2026:09:51:34] ENGINE Bus STARTED
Jan 23 09:51:35 compute-0 ceph-mon[74335]: [23/Jan/2026:09:51:34] ENGINE Client ('192.168.122.100', 48072) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:51:35 compute-0 ceph-mon[74335]: 4.9 scrub starts
Jan 23 09:51:35 compute-0 ceph-mon[74335]: 4.9 scrub ok
Jan 23 09:51:35 compute-0 ceph-mon[74335]: pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:35 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 ceph-mon[74335]: from='client.14376 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:35 compute-0 ceph-mon[74335]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:35 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:35 compute-0 sudo[90480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 09:51:35 compute-0 sudo[90480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:35 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.nbdygh(active, since 2s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:35 compute-0 python3[90459]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:35 compute-0 podman[90505]: 2026-01-23 09:51:35.740430228 +0000 UTC m=+0.029325641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:35 compute-0 podman[90505]: 2026-01-23 09:51:35.877623857 +0000 UTC m=+0.166519250 container create 872a3969d51c0b261d3554118ef98f8efb63419eba49e352e36fd9ffac86e20c (image=quay.io/ceph/ceph:v19, name=peaceful_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:35 compute-0 systemd[1]: Started libpod-conmon-872a3969d51c0b261d3554118ef98f8efb63419eba49e352e36fd9ffac86e20c.scope.
Jan 23 09:51:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0317c35fd23efd3a934f373196d764a653b095669d077373a52465e9a09dac04/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0317c35fd23efd3a934f373196d764a653b095669d077373a52465e9a09dac04/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0317c35fd23efd3a934f373196d764a653b095669d077373a52465e9a09dac04/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:35 compute-0 podman[90505]: 2026-01-23 09:51:35.970425909 +0000 UTC m=+0.259321322 container init 872a3969d51c0b261d3554118ef98f8efb63419eba49e352e36fd9ffac86e20c (image=quay.io/ceph/ceph:v19, name=peaceful_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:35 compute-0 podman[90505]: 2026-01-23 09:51:35.978718751 +0000 UTC m=+0.267614144 container start 872a3969d51c0b261d3554118ef98f8efb63419eba49e352e36fd9ffac86e20c (image=quay.io/ceph/ceph:v19, name=peaceful_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:35 compute-0 podman[90505]: 2026-01-23 09:51:35.985873274 +0000 UTC m=+0.274768667 container attach 872a3969d51c0b261d3554118ef98f8efb63419eba49e352e36fd9ffac86e20c (image=quay.io/ceph/ceph:v19, name=peaceful_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 23 09:51:36 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 23 09:51:36 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:36 compute-0 sudo[90480]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:36 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Jan 23 09:51:36 compute-0 sudo[90573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:51:36 compute-0 sudo[90573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:36 compute-0 sudo[90573]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:36 compute-0 sudo[90601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 23 09:51:36 compute-0 sudo[90601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: 2.c scrub starts
Jan 23 09:51:36 compute-0 ceph-mon[74335]: 2.c scrub ok
Jan 23 09:51:36 compute-0 sudo[90601]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:36 compute-0 ceph-mon[74335]: 3.10 scrub starts
Jan 23 09:51:36 compute-0 ceph-mon[74335]: 3.10 scrub ok
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mgrmap e23: compute-0.nbdygh(active, since 2s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:51:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:51:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:51:36 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:51:36 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:51:36 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:51:36 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:51:36 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:51:36 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:51:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:36 compute-0 sudo[90644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 23 09:51:36 compute-0 sudo[90644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:36 compute-0 sudo[90644]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 sudo[90669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph
Jan 23 09:51:37 compute-0 sudo[90669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[90669]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 23 09:51:37 compute-0 sudo[90694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:51:37 compute-0 sudo[90694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 23 09:51:37 compute-0 sudo[90694]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 sudo[90719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:51:37 compute-0 sudo[90719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[90719]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 sudo[90744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:51:37 compute-0 sudo[90744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[90744]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 23 09:51:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Jan 23 09:51:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e36 e36: 2 total, 2 up, 2 in
Jan 23 09:51:37 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e36: 2 total, 2 up, 2 in
Jan 23 09:51:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Jan 23 09:51:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Jan 23 09:51:37 compute-0 sudo[90792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:51:37 compute-0 sudo[90792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[90792]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 sudo[90817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:51:37 compute-0 sudo[90817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[90817]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 sudo[90842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 23 09:51:37 compute-0 sudo[90842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[90842]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:37 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:37 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:37 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:37 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:37 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:37 compute-0 sudo[90867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:51:37 compute-0 sudo[90867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[90867]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 sudo[90892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:51:37 compute-0 sudo[90892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[90892]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 sudo[90917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:51:37 compute-0 sudo[90917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[90917]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 sudo[90942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:51:37 compute-0 sudo[90942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[90942]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 sudo[90967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:51:37 compute-0 sudo[90967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[90967]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 ceph-mon[74335]: 2.d scrub starts
Jan 23 09:51:37 compute-0 ceph-mon[74335]: 2.d scrub ok
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='client.14385 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 09:51:37 compute-0 ceph-mon[74335]: 3.13 scrub starts
Jan 23 09:51:37 compute-0 ceph-mon[74335]: 3.13 scrub ok
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:51:37 compute-0 ceph-mon[74335]: Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:51:37 compute-0 ceph-mon[74335]: Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:51:37 compute-0 ceph-mon[74335]: Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:51:37 compute-0 ceph-mon[74335]: pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Jan 23 09:51:37 compute-0 ceph-mon[74335]: osdmap e36: 2 total, 2 up, 2 in
Jan 23 09:51:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Jan 23 09:51:37 compute-0 sudo[91015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:51:37 compute-0 sudo[91015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:37 compute-0 sudo[91015]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:37 compute-0 sudo[91040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:51:37 compute-0 sudo[91040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91040]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.nbdygh(active, since 5s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:38 compute-0 sudo[91065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:38 compute-0 sudo[91065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91065]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 23 09:51:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 36 pg[8.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:51:38 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 23 09:51:38 compute-0 sudo[91090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 23 09:51:38 compute-0 sudo[91090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91090]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 sudo[91115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph
Jan 23 09:51:38 compute-0 sudo[91115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91115]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 23 09:51:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Jan 23 09:51:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e37 e37: 2 total, 2 up, 2 in
Jan 23 09:51:38 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e37: 2 total, 2 up, 2 in
Jan 23 09:51:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 37 pg[8.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:51:38 compute-0 sudo[91140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:51:38 compute-0 sudo[91140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91140]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 23 09:51:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:38 compute-0 systemd[1]: libpod-872a3969d51c0b261d3554118ef98f8efb63419eba49e352e36fd9ffac86e20c.scope: Deactivated successfully.
Jan 23 09:51:38 compute-0 sudo[91175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:51:38 compute-0 sudo[91175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91175]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 podman[91197]: 2026-01-23 09:51:38.399127127 +0000 UTC m=+0.026617312 container died 872a3969d51c0b261d3554118ef98f8efb63419eba49e352e36fd9ffac86e20c (image=quay.io/ceph/ceph:v19, name=peaceful_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 09:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0317c35fd23efd3a934f373196d764a653b095669d077373a52465e9a09dac04-merged.mount: Deactivated successfully.
Jan 23 09:51:38 compute-0 sudo[91212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:51:38 compute-0 sudo[91212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91212]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 podman[91197]: 2026-01-23 09:51:38.470918663 +0000 UTC m=+0.098408818 container remove 872a3969d51c0b261d3554118ef98f8efb63419eba49e352e36fd9ffac86e20c (image=quay.io/ceph/ceph:v19, name=peaceful_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 09:51:38 compute-0 systemd[1]: libpod-conmon-872a3969d51c0b261d3554118ef98f8efb63419eba49e352e36fd9ffac86e20c.scope: Deactivated successfully.
Jan 23 09:51:38 compute-0 sudo[90452]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 sudo[91264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:51:38 compute-0 sudo[91264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91264]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 sudo[91289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:51:38 compute-0 sudo[91289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91289]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 sudo[91315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 sudo[91315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91315]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 sudo[91340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:51:38 compute-0 sudo[91340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91340]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 sudo[91365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:51:38 compute-0 sudo[91365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91365]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 sudo[91390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:51:38 compute-0 sudo[91390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91390]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v9: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:38 compute-0 sudo[91415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:51:38 compute-0 sudo[91415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:38 compute-0 sudo[91415]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:39 compute-0 ceph-mon[74335]: 2.e scrub starts
Jan 23 09:51:39 compute-0 ceph-mon[74335]: 2.e scrub ok
Jan 23 09:51:39 compute-0 ceph-mon[74335]: Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:39 compute-0 ceph-mon[74335]: Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:39 compute-0 ceph-mon[74335]: Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:51:39 compute-0 ceph-mon[74335]: 3.f scrub starts
Jan 23 09:51:39 compute-0 ceph-mon[74335]: 3.f scrub ok
Jan 23 09:51:39 compute-0 ceph-mon[74335]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:39 compute-0 ceph-mon[74335]: mgrmap e24: compute-0.nbdygh(active, since 5s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:39 compute-0 ceph-mon[74335]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:39 compute-0 ceph-mon[74335]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:51:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Jan 23 09:51:39 compute-0 ceph-mon[74335]: osdmap e37: 2 total, 2 up, 2 in
Jan 23 09:51:39 compute-0 ceph-mon[74335]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:39 compute-0 ceph-mon[74335]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 09:51:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:51:39 compute-0 sudo[91440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:51:39 compute-0 sudo[91440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:39 compute-0 sudo[91440]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:51:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:39 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 23 09:51:39 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 23 09:51:39 compute-0 sudo[91488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:51:39 compute-0 sudo[91488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:39 compute-0 sudo[91488]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:39 compute-0 sudo[91513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:51:39 compute-0 sudo[91513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:39 compute-0 sudo[91513]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 23 09:51:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e38 e38: 2 total, 2 up, 2 in
Jan 23 09:51:39 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e38: 2 total, 2 up, 2 in
Jan 23 09:51:39 compute-0 sudo[91541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:39 compute-0 sudo[91541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:39 compute-0 sudo[91541]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:51:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:51:39 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.nbdygh(active, since 6s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:39 compute-0 sudo[91638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsgsphtzsbmmoopsepnzfvxmjsioczjo ; /usr/bin/python3'
Jan 23 09:51:39 compute-0 sudo[91638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:39 compute-0 python3[91640]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 09:51:39 compute-0 sudo[91638]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:51:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:51:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:51:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:39 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev f047efc7-07f7-4bc2-97ab-19121ab991b4 (Updating node-exporter deployment (+1 -> 3))
Jan 23 09:51:39 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Jan 23 09:51:39 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Jan 23 09:51:39 compute-0 sudo[91711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arvibkclupwcuvrtqkiobyoerrwkcqby ; /usr/bin/python3'
Jan 23 09:51:39 compute-0 sudo[91711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:39 compute-0 python3[91713]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769161899.2898505-37612-157408285854808/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=a6273c4bda164a032598e5e81cbd7f6e9c0876d5 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:51:39 compute-0 sudo[91711]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:40 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 23 09:51:40 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 23 09:51:40 compute-0 ceph-mon[74335]: 2.10 scrub starts
Jan 23 09:51:40 compute-0 ceph-mon[74335]: 2.10 scrub ok
Jan 23 09:51:40 compute-0 ceph-mon[74335]: Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:40 compute-0 ceph-mon[74335]: 4.15 scrub starts
Jan 23 09:51:40 compute-0 ceph-mon[74335]: 4.15 scrub ok
Jan 23 09:51:40 compute-0 ceph-mon[74335]: Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:40 compute-0 ceph-mon[74335]: pgmap v9: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:40 compute-0 ceph-mon[74335]: Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:51:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:40 compute-0 ceph-mon[74335]: 2.13 scrub starts
Jan 23 09:51:40 compute-0 ceph-mon[74335]: 2.13 scrub ok
Jan 23 09:51:40 compute-0 ceph-mon[74335]: osdmap e38: 2 total, 2 up, 2 in
Jan 23 09:51:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:40 compute-0 ceph-mon[74335]: mgrmap e25: compute-0.nbdygh(active, since 6s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:51:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:40 compute-0 ceph-mon[74335]: 5.15 scrub starts
Jan 23 09:51:40 compute-0 ceph-mon[74335]: 5.15 scrub ok
Jan 23 09:51:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:40 compute-0 sudo[91761]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viiapfaqjcibzkotrqhpbivxuwinaequ ; /usr/bin/python3'
Jan 23 09:51:40 compute-0 sudo[91761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:40 compute-0 python3[91763]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:40 compute-0 podman[91764]: 2026-01-23 09:51:40.593675987 +0000 UTC m=+0.112454607 container create 0932bbe46a3bb41ce0932aa60158caa70ee1dbb417195ed403abc517d0f72224 (image=quay.io/ceph/ceph:v19, name=mystifying_volhard, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:51:40 compute-0 podman[91764]: 2026-01-23 09:51:40.506897887 +0000 UTC m=+0.025676537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:40 compute-0 systemd[1]: Started libpod-conmon-0932bbe46a3bb41ce0932aa60158caa70ee1dbb417195ed403abc517d0f72224.scope.
Jan 23 09:51:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628bcc5fe13ea69075e783ceafbddacccbcf214756bb400f131501ea3ad28410/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628bcc5fe13ea69075e783ceafbddacccbcf214756bb400f131501ea3ad28410/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:40 compute-0 podman[91764]: 2026-01-23 09:51:40.784952148 +0000 UTC m=+0.303730788 container init 0932bbe46a3bb41ce0932aa60158caa70ee1dbb417195ed403abc517d0f72224 (image=quay.io/ceph/ceph:v19, name=mystifying_volhard, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 09:51:40 compute-0 podman[91764]: 2026-01-23 09:51:40.792152632 +0000 UTC m=+0.310931252 container start 0932bbe46a3bb41ce0932aa60158caa70ee1dbb417195ed403abc517d0f72224 (image=quay.io/ceph/ceph:v19, name=mystifying_volhard, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 09:51:40 compute-0 podman[91764]: 2026-01-23 09:51:40.797838047 +0000 UTC m=+0.316616697 container attach 0932bbe46a3bb41ce0932aa60158caa70ee1dbb417195ed403abc517d0f72224 (image=quay.io/ceph/ceph:v19, name=mystifying_volhard, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 09:51:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v11: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Jan 23 09:51:41 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 23 09:51:41 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 23 09:51:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 23 09:51:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/992291970' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 23 09:51:41 compute-0 ceph-mon[74335]: Deploying daemon node-exporter.compute-2 on compute-2
Jan 23 09:51:41 compute-0 ceph-mon[74335]: 2.15 scrub starts
Jan 23 09:51:41 compute-0 ceph-mon[74335]: 2.15 scrub ok
Jan 23 09:51:41 compute-0 ceph-mon[74335]: 5.16 scrub starts
Jan 23 09:51:41 compute-0 ceph-mon[74335]: 5.16 scrub ok
Jan 23 09:51:42 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 23 09:51:42 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 23 09:51:42 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/992291970' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 23 09:51:42 compute-0 systemd[1]: libpod-0932bbe46a3bb41ce0932aa60158caa70ee1dbb417195ed403abc517d0f72224.scope: Deactivated successfully.
Jan 23 09:51:42 compute-0 podman[91764]: 2026-01-23 09:51:42.842318481 +0000 UTC m=+2.361097101 container died 0932bbe46a3bb41ce0932aa60158caa70ee1dbb417195ed403abc517d0f72224 (image=quay.io/ceph/ceph:v19, name=mystifying_volhard, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 09:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-628bcc5fe13ea69075e783ceafbddacccbcf214756bb400f131501ea3ad28410-merged.mount: Deactivated successfully.
Jan 23 09:51:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v12: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Jan 23 09:51:43 compute-0 podman[91764]: 2026-01-23 09:51:43.116839161 +0000 UTC m=+2.635617781 container remove 0932bbe46a3bb41ce0932aa60158caa70ee1dbb417195ed403abc517d0f72224 (image=quay.io/ceph/ceph:v19, name=mystifying_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 23 09:51:43 compute-0 sudo[91761]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:43 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 23 09:51:43 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 23 09:51:43 compute-0 systemd[1]: libpod-conmon-0932bbe46a3bb41ce0932aa60158caa70ee1dbb417195ed403abc517d0f72224.scope: Deactivated successfully.
Jan 23 09:51:43 compute-0 ceph-mon[74335]: pgmap v11: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Jan 23 09:51:43 compute-0 ceph-mon[74335]: 2.19 scrub starts
Jan 23 09:51:43 compute-0 ceph-mon[74335]: 2.19 scrub ok
Jan 23 09:51:43 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/992291970' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 23 09:51:43 compute-0 ceph-mon[74335]: 4.13 scrub starts
Jan 23 09:51:43 compute-0 ceph-mon[74335]: 4.13 scrub ok
Jan 23 09:51:43 compute-0 sudo[91838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otujfixrjkgqmdbvxufivfmwnqxuhrvw ; /usr/bin/python3'
Jan 23 09:51:43 compute-0 sudo[91838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:43 compute-0 python3[91840]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:43 compute-0 podman[91842]: 2026-01-23 09:51:43.919191668 +0000 UTC m=+0.042545199 container create d47d7c9a8c6b5a3ca422cc8c25f399a12cb440c70ad9e51be899681f24bd5a94 (image=quay.io/ceph/ceph:v19, name=distracted_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 09:51:43 compute-0 systemd[1]: Started libpod-conmon-d47d7c9a8c6b5a3ca422cc8c25f399a12cb440c70ad9e51be899681f24bd5a94.scope.
Jan 23 09:51:43 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:43 compute-0 podman[91842]: 2026-01-23 09:51:43.901874155 +0000 UTC m=+0.025227696 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/324f3c4cdea60157153e42ae6d7a79e228b1db3ae56b247d85f4f84a82dba983/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/324f3c4cdea60157153e42ae6d7a79e228b1db3ae56b247d85f4f84a82dba983/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:44 compute-0 podman[91842]: 2026-01-23 09:51:44.031743207 +0000 UTC m=+0.155096768 container init d47d7c9a8c6b5a3ca422cc8c25f399a12cb440c70ad9e51be899681f24bd5a94 (image=quay.io/ceph/ceph:v19, name=distracted_austin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 23 09:51:44 compute-0 podman[91842]: 2026-01-23 09:51:44.038209082 +0000 UTC m=+0.161562613 container start d47d7c9a8c6b5a3ca422cc8c25f399a12cb440c70ad9e51be899681f24bd5a94 (image=quay.io/ceph/ceph:v19, name=distracted_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 23 09:51:44 compute-0 podman[91842]: 2026-01-23 09:51:44.042854461 +0000 UTC m=+0.166208012 container attach d47d7c9a8c6b5a3ca422cc8c25f399a12cb440c70ad9e51be899681f24bd5a94 (image=quay.io/ceph/ceph:v19, name=distracted_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Jan 23 09:51:44 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 23 09:51:44 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 23 09:51:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:51:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:51:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 23 09:51:44 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1904452043' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 09:51:44 compute-0 distracted_austin[91859]: 
Jan 23 09:51:44 compute-0 distracted_austin[91859]: {"fsid":"f3005f84-239a-55b6-a948-8f1fb592b920","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":51,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":38,"num_osds":2,"num_up_osds":2,"osd_up_since":1769161804,"num_in_osds":2,"osd_in_since":1769161767,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194}],"num_pgs":194,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":56172544,"bytes_avail":42885111808,"bytes_total":42941284352,"read_bytes_sec":30028,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2026-01-23T09:51:34:000852+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":3,"modified":"2026-01-23T09:50:57.514849+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"f047efc7-07f7-4bc2-97ab-19121ab991b4":{"message":"Updating node-exporter deployment (+1 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 23 09:51:44 compute-0 systemd[1]: libpod-d47d7c9a8c6b5a3ca422cc8c25f399a12cb440c70ad9e51be899681f24bd5a94.scope: Deactivated successfully.
Jan 23 09:51:44 compute-0 podman[91842]: 2026-01-23 09:51:44.511387722 +0000 UTC m=+0.634741253 container died d47d7c9a8c6b5a3ca422cc8c25f399a12cb440c70ad9e51be899681f24bd5a94 (image=quay.io/ceph/ceph:v19, name=distracted_austin, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 23 09:51:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-324f3c4cdea60157153e42ae6d7a79e228b1db3ae56b247d85f4f84a82dba983-merged.mount: Deactivated successfully.
Jan 23 09:51:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 23 09:51:44 compute-0 ceph-mon[74335]: 7.4 scrub starts
Jan 23 09:51:44 compute-0 ceph-mon[74335]: 7.4 scrub ok
Jan 23 09:51:44 compute-0 ceph-mon[74335]: 5.11 scrub starts
Jan 23 09:51:44 compute-0 ceph-mon[74335]: 5.11 scrub ok
Jan 23 09:51:44 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/992291970' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 23 09:51:44 compute-0 ceph-mon[74335]: pgmap v12: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Jan 23 09:51:44 compute-0 ceph-mon[74335]: 7.8 scrub starts
Jan 23 09:51:44 compute-0 ceph-mon[74335]: 7.8 scrub ok
Jan 23 09:51:44 compute-0 ceph-mon[74335]: 3.14 scrub starts
Jan 23 09:51:44 compute-0 ceph-mon[74335]: 3.14 scrub ok
Jan 23 09:51:44 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:44 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1904452043' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 09:51:44 compute-0 podman[91842]: 2026-01-23 09:51:44.826131271 +0000 UTC m=+0.949484802 container remove d47d7c9a8c6b5a3ca422cc8c25f399a12cb440c70ad9e51be899681f24bd5a94 (image=quay.io/ceph/ceph:v19, name=distracted_austin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:44 compute-0 systemd[1]: libpod-conmon-d47d7c9a8c6b5a3ca422cc8c25f399a12cb440c70ad9e51be899681f24bd5a94.scope: Deactivated successfully.
Jan 23 09:51:44 compute-0 sudo[91838]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v13: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Jan 23 09:51:45 compute-0 sudo[91921]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixbtzakudztqztunlheogwutvplbzbwi ; /usr/bin/python3'
Jan 23 09:51:45 compute-0 sudo[91921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:45 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev f047efc7-07f7-4bc2-97ab-19121ab991b4 (Updating node-exporter deployment (+1 -> 3))
Jan 23 09:51:45 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event f047efc7-07f7-4bc2-97ab-19121ab991b4 (Updating node-exporter deployment (+1 -> 3)) in 5 seconds
Jan 23 09:51:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 23 09:51:45 compute-0 python3[91923]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:45 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.14 deep-scrub starts
Jan 23 09:51:45 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.14 deep-scrub ok
Jan 23 09:51:45 compute-0 podman[91924]: 2026-01-23 09:51:45.210537522 +0000 UTC m=+0.026398536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:45 compute-0 podman[91924]: 2026-01-23 09:51:45.365419082 +0000 UTC m=+0.181280076 container create ffbc91961ec7d7c080f95cdd4eb1865bf3b03076ecf82b68f3048c39a7aeb8d1 (image=quay.io/ceph/ceph:v19, name=romantic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:51:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 09:51:45 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:51:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 09:51:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:51:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:51:45 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:51:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 09:51:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:51:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:51:45 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:51:45 compute-0 sudo[91937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:51:45 compute-0 sudo[91937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:45 compute-0 sudo[91937]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:45 compute-0 sudo[91962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 09:51:45 compute-0 sudo[91962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:45 compute-0 systemd[1]: Started libpod-conmon-ffbc91961ec7d7c080f95cdd4eb1865bf3b03076ecf82b68f3048c39a7aeb8d1.scope.
Jan 23 09:51:45 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0d9745d1cc8585db1a8ddcb8bf67de13b651f87ec61fbcf0a9c5c64a5fc3c1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0d9745d1cc8585db1a8ddcb8bf67de13b651f87ec61fbcf0a9c5c64a5fc3c1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:45 compute-0 podman[91924]: 2026-01-23 09:51:45.761421749 +0000 UTC m=+0.577282783 container init ffbc91961ec7d7c080f95cdd4eb1865bf3b03076ecf82b68f3048c39a7aeb8d1 (image=quay.io/ceph/ceph:v19, name=romantic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 09:51:45 compute-0 podman[91924]: 2026-01-23 09:51:45.770702527 +0000 UTC m=+0.586563531 container start ffbc91961ec7d7c080f95cdd4eb1865bf3b03076ecf82b68f3048c39a7aeb8d1 (image=quay.io/ceph/ceph:v19, name=romantic_beaver, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Jan 23 09:51:45 compute-0 podman[91924]: 2026-01-23 09:51:45.872854869 +0000 UTC m=+0.688715963 container attach ffbc91961ec7d7c080f95cdd4eb1865bf3b03076ecf82b68f3048c39a7aeb8d1 (image=quay.io/ceph/ceph:v19, name=romantic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:46 compute-0 podman[92050]: 2026-01-23 09:51:46.006761853 +0000 UTC m=+0.031047465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:51:46 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 23 09:51:46 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 23 09:51:46 compute-0 ceph-mon[74335]: 7.a scrub starts
Jan 23 09:51:46 compute-0 ceph-mon[74335]: 7.a scrub ok
Jan 23 09:51:46 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:46 compute-0 ceph-mon[74335]: 5.1f deep-scrub starts
Jan 23 09:51:46 compute-0 ceph-mon[74335]: 5.1f deep-scrub ok
Jan 23 09:51:46 compute-0 ceph-mon[74335]: pgmap v13: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Jan 23 09:51:46 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:46 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:46 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:51:46 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:51:46 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:51:46 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:51:46 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:51:46 compute-0 podman[92050]: 2026-01-23 09:51:46.240762207 +0000 UTC m=+0.265047799 container create 6d375eceb1ae69aaae6fd7c11bee75b2610818d6111dbfba14e099ed7d659c29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_pare, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:46 compute-0 systemd[1]: Started libpod-conmon-6d375eceb1ae69aaae6fd7c11bee75b2610818d6111dbfba14e099ed7d659c29.scope.
Jan 23 09:51:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 09:51:46 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/985471869' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 09:51:46 compute-0 romantic_beaver[91989]: 
Jan 23 09:51:46 compute-0 romantic_beaver[91989]: {"epoch":3,"fsid":"f3005f84-239a-55b6-a948-8f1fb592b920","modified":"2026-01-23T09:50:47.540109Z","created":"2026-01-23T09:47:35.499222Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 23 09:51:46 compute-0 romantic_beaver[91989]: dumped monmap epoch 3
Jan 23 09:51:46 compute-0 systemd[1]: libpod-ffbc91961ec7d7c080f95cdd4eb1865bf3b03076ecf82b68f3048c39a7aeb8d1.scope: Deactivated successfully.
Jan 23 09:51:46 compute-0 podman[92050]: 2026-01-23 09:51:46.351279683 +0000 UTC m=+0.375565295 container init 6d375eceb1ae69aaae6fd7c11bee75b2610818d6111dbfba14e099ed7d659c29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_pare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 09:51:46 compute-0 podman[92050]: 2026-01-23 09:51:46.359706709 +0000 UTC m=+0.383992291 container start 6d375eceb1ae69aaae6fd7c11bee75b2610818d6111dbfba14e099ed7d659c29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Jan 23 09:51:46 compute-0 inspiring_pare[92067]: 167 167
Jan 23 09:51:46 compute-0 systemd[1]: libpod-6d375eceb1ae69aaae6fd7c11bee75b2610818d6111dbfba14e099ed7d659c29.scope: Deactivated successfully.
Jan 23 09:51:46 compute-0 podman[92050]: 2026-01-23 09:51:46.38438143 +0000 UTC m=+0.408667032 container attach 6d375eceb1ae69aaae6fd7c11bee75b2610818d6111dbfba14e099ed7d659c29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_pare, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:51:46 compute-0 podman[92050]: 2026-01-23 09:51:46.384900503 +0000 UTC m=+0.409186085 container died 6d375eceb1ae69aaae6fd7c11bee75b2610818d6111dbfba14e099ed7d659c29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:51:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b38efcfd34f6b2e2f5a315e287ab29d702d5fe49aece80b4581b75dc247ddad2-merged.mount: Deactivated successfully.
Jan 23 09:51:46 compute-0 podman[92050]: 2026-01-23 09:51:46.466808058 +0000 UTC m=+0.491093640 container remove 6d375eceb1ae69aaae6fd7c11bee75b2610818d6111dbfba14e099ed7d659c29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_pare, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 09:51:46 compute-0 systemd[1]: libpod-conmon-6d375eceb1ae69aaae6fd7c11bee75b2610818d6111dbfba14e099ed7d659c29.scope: Deactivated successfully.
Jan 23 09:51:46 compute-0 podman[91924]: 2026-01-23 09:51:46.506521563 +0000 UTC m=+1.322382567 container died ffbc91961ec7d7c080f95cdd4eb1865bf3b03076ecf82b68f3048c39a7aeb8d1 (image=quay.io/ceph/ceph:v19, name=romantic_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 23 09:51:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e0d9745d1cc8585db1a8ddcb8bf67de13b651f87ec61fbcf0a9c5c64a5fc3c1-merged.mount: Deactivated successfully.
Jan 23 09:51:46 compute-0 podman[92072]: 2026-01-23 09:51:46.812641071 +0000 UTC m=+0.477718747 container remove ffbc91961ec7d7c080f95cdd4eb1865bf3b03076ecf82b68f3048c39a7aeb8d1 (image=quay.io/ceph/ceph:v19, name=romantic_beaver, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 09:51:46 compute-0 systemd[1]: libpod-conmon-ffbc91961ec7d7c080f95cdd4eb1865bf3b03076ecf82b68f3048c39a7aeb8d1.scope: Deactivated successfully.
Jan 23 09:51:46 compute-0 sudo[91921]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:46 compute-0 podman[92109]: 2026-01-23 09:51:46.856190904 +0000 UTC m=+0.272846657 container create 8a1d577e50851cc0ead7348082618fba1eb26a286fca71627a0dfbefbb7dbc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:46 compute-0 systemd[1]: Started libpod-conmon-8a1d577e50851cc0ead7348082618fba1eb26a286fca71627a0dfbefbb7dbc82.scope.
Jan 23 09:51:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d3d3778f89256c6827fa104bac53138a6bd6cd320a4f033b0362e66f33b728/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d3d3778f89256c6827fa104bac53138a6bd6cd320a4f033b0362e66f33b728/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d3d3778f89256c6827fa104bac53138a6bd6cd320a4f033b0362e66f33b728/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d3d3778f89256c6827fa104bac53138a6bd6cd320a4f033b0362e66f33b728/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d3d3778f89256c6827fa104bac53138a6bd6cd320a4f033b0362e66f33b728/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:46 compute-0 podman[92109]: 2026-01-23 09:51:46.837441855 +0000 UTC m=+0.254097608 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:51:46 compute-0 podman[92109]: 2026-01-23 09:51:46.94243388 +0000 UTC m=+0.359089653 container init 8a1d577e50851cc0ead7348082618fba1eb26a286fca71627a0dfbefbb7dbc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 09:51:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v14: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Jan 23 09:51:46 compute-0 podman[92109]: 2026-01-23 09:51:46.948826993 +0000 UTC m=+0.365482746 container start 8a1d577e50851cc0ead7348082618fba1eb26a286fca71627a0dfbefbb7dbc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_perlman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 23 09:51:46 compute-0 podman[92109]: 2026-01-23 09:51:46.952647711 +0000 UTC m=+0.369303484 container attach 8a1d577e50851cc0ead7348082618fba1eb26a286fca71627a0dfbefbb7dbc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_perlman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:51:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "2edb8fa1-89ea-44cd-9b6e-9f4d89095397"} v 0)
Jan 23 09:51:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2edb8fa1-89ea-44cd-9b6e-9f4d89095397"}]: dispatch
Jan 23 09:51:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 23 09:51:47 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 23 09:51:47 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 23 09:51:47 compute-0 xenodochial_perlman[92126]: --> passed data devices: 0 physical, 1 LVM
Jan 23 09:51:47 compute-0 xenodochial_perlman[92126]: --> All data devices are unavailable
Jan 23 09:51:47 compute-0 sudo[92164]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooweujjmnisivlpmyxmgxzlzdsrsiaqv ; /usr/bin/python3'
Jan 23 09:51:47 compute-0 sudo[92164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:47 compute-0 systemd[1]: libpod-8a1d577e50851cc0ead7348082618fba1eb26a286fca71627a0dfbefbb7dbc82.scope: Deactivated successfully.
Jan 23 09:51:47 compute-0 podman[92109]: 2026-01-23 09:51:47.319328168 +0000 UTC m=+0.735983941 container died 8a1d577e50851cc0ead7348082618fba1eb26a286fca71627a0dfbefbb7dbc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_perlman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 09:51:47 compute-0 python3[92166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2edb8fa1-89ea-44cd-9b6e-9f4d89095397"}]': finished
Jan 23 09:51:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Jan 23 09:51:47 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Jan 23 09:51:48 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 11 completed events
Jan 23 09:51:48 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 23 09:51:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:51:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:51:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:51:48 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:51:48 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 23 09:51:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6d3d3778f89256c6827fa104bac53138a6bd6cd320a4f033b0362e66f33b728-merged.mount: Deactivated successfully.
Jan 23 09:51:48 compute-0 ceph-mon[74335]: 7.14 deep-scrub starts
Jan 23 09:51:48 compute-0 ceph-mon[74335]: 7.14 deep-scrub ok
Jan 23 09:51:48 compute-0 ceph-mon[74335]: 5.10 scrub starts
Jan 23 09:51:48 compute-0 ceph-mon[74335]: 5.10 scrub ok
Jan 23 09:51:48 compute-0 ceph-mon[74335]: 7.b scrub starts
Jan 23 09:51:48 compute-0 ceph-mon[74335]: 7.b scrub ok
Jan 23 09:51:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/985471869' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 09:51:48 compute-0 ceph-mon[74335]: 6.1e deep-scrub starts
Jan 23 09:51:48 compute-0 ceph-mon[74335]: 6.1e deep-scrub ok
Jan 23 09:51:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1205331151' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2edb8fa1-89ea-44cd-9b6e-9f4d89095397"}]: dispatch
Jan 23 09:51:48 compute-0 ceph-mon[74335]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2edb8fa1-89ea-44cd-9b6e-9f4d89095397"}]: dispatch
Jan 23 09:51:48 compute-0 podman[92109]: 2026-01-23 09:51:48.731138821 +0000 UTC m=+2.147794574 container remove 8a1d577e50851cc0ead7348082618fba1eb26a286fca71627a0dfbefbb7dbc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_perlman, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:48 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:48 compute-0 sudo[91962]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:48 compute-0 systemd[1]: libpod-conmon-8a1d577e50851cc0ead7348082618fba1eb26a286fca71627a0dfbefbb7dbc82.scope: Deactivated successfully.
Jan 23 09:51:48 compute-0 podman[92178]: 2026-01-23 09:51:48.848704088 +0000 UTC m=+1.392834459 container create 40d6f5cea28c249bb0f03a17087e400a6c8346db4d9649dde35505d30cfcdd4f (image=quay.io/ceph/ceph:v19, name=gallant_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 23 09:51:48 compute-0 sudo[92191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:51:48 compute-0 sudo[92191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:48 compute-0 sudo[92191]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:48 compute-0 systemd[1]: Started libpod-conmon-40d6f5cea28c249bb0f03a17087e400a6c8346db4d9649dde35505d30cfcdd4f.scope.
Jan 23 09:51:48 compute-0 podman[92178]: 2026-01-23 09:51:48.818154377 +0000 UTC m=+1.362284758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ad846954426fc874d136b92f6163433ac7d66140d8c176a31c7de936bc6f7a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ad846954426fc874d136b92f6163433ac7d66140d8c176a31c7de936bc6f7a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:48 compute-0 podman[92178]: 2026-01-23 09:51:48.936629837 +0000 UTC m=+1.480760228 container init 40d6f5cea28c249bb0f03a17087e400a6c8346db4d9649dde35505d30cfcdd4f (image=quay.io/ceph/ceph:v19, name=gallant_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:51:48 compute-0 sudo[92218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 09:51:48 compute-0 sudo[92218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v16: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:48 compute-0 podman[92178]: 2026-01-23 09:51:48.944488188 +0000 UTC m=+1.488618559 container start 40d6f5cea28c249bb0f03a17087e400a6c8346db4d9649dde35505d30cfcdd4f (image=quay.io/ceph/ceph:v19, name=gallant_tesla, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 23 09:51:48 compute-0 podman[92178]: 2026-01-23 09:51:48.949092275 +0000 UTC m=+1.493222646 container attach 40d6f5cea28c249bb0f03a17087e400a6c8346db4d9649dde35505d30cfcdd4f (image=quay.io/ceph/ceph:v19, name=gallant_tesla, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:51:49 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 23 09:51:49 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 23 09:51:49 compute-0 podman[92307]: 2026-01-23 09:51:49.368603414 +0000 UTC m=+0.043755540 container create a8f4a9977bc832ca707b5ac6f3c42e3cc65f7409904f92af8c0228e79361375e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gould, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 09:51:49 compute-0 systemd[1]: Started libpod-conmon-a8f4a9977bc832ca707b5ac6f3c42e3cc65f7409904f92af8c0228e79361375e.scope.
Jan 23 09:51:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 23 09:51:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3560526778' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 23 09:51:49 compute-0 gallant_tesla[92219]: [client.openstack]
Jan 23 09:51:49 compute-0 gallant_tesla[92219]:         key = AQB8Q3NpAAAAABAATAj6yCl+1UaIO/yyy7nUXA==
Jan 23 09:51:49 compute-0 gallant_tesla[92219]:         caps mgr = "allow *"
Jan 23 09:51:49 compute-0 gallant_tesla[92219]:         caps mon = "profile rbd"
Jan 23 09:51:49 compute-0 gallant_tesla[92219]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 23 09:51:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:49 compute-0 podman[92307]: 2026-01-23 09:51:49.348835588 +0000 UTC m=+0.023987714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:51:49 compute-0 podman[92307]: 2026-01-23 09:51:49.447767786 +0000 UTC m=+0.122919932 container init a8f4a9977bc832ca707b5ac6f3c42e3cc65f7409904f92af8c0228e79361375e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gould, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:51:49 compute-0 podman[92307]: 2026-01-23 09:51:49.453906499 +0000 UTC m=+0.129058625 container start a8f4a9977bc832ca707b5ac6f3c42e3cc65f7409904f92af8c0228e79361375e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gould, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Jan 23 09:51:49 compute-0 systemd[1]: libpod-40d6f5cea28c249bb0f03a17087e400a6c8346db4d9649dde35505d30cfcdd4f.scope: Deactivated successfully.
Jan 23 09:51:49 compute-0 distracted_gould[92323]: 167 167
Jan 23 09:51:49 compute-0 systemd[1]: libpod-a8f4a9977bc832ca707b5ac6f3c42e3cc65f7409904f92af8c0228e79361375e.scope: Deactivated successfully.
Jan 23 09:51:49 compute-0 podman[92307]: 2026-01-23 09:51:49.458632272 +0000 UTC m=+0.133784428 container attach a8f4a9977bc832ca707b5ac6f3c42e3cc65f7409904f92af8c0228e79361375e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gould, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 09:51:49 compute-0 podman[92307]: 2026-01-23 09:51:49.459070945 +0000 UTC m=+0.134223081 container died a8f4a9977bc832ca707b5ac6f3c42e3cc65f7409904f92af8c0228e79361375e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gould, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:51:49 compute-0 podman[92178]: 2026-01-23 09:51:49.475707464 +0000 UTC m=+2.019837845 container died 40d6f5cea28c249bb0f03a17087e400a6c8346db4d9649dde35505d30cfcdd4f (image=quay.io/ceph/ceph:v19, name=gallant_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 23 09:51:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebdef050dc96e53f5ea6f965827c0206a0ab671226f7a672150594d2c322ac7c-merged.mount: Deactivated successfully.
Jan 23 09:51:49 compute-0 podman[92307]: 2026-01-23 09:51:49.564232003 +0000 UTC m=+0.239384129 container remove a8f4a9977bc832ca707b5ac6f3c42e3cc65f7409904f92af8c0228e79361375e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_gould, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 09:51:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-38ad846954426fc874d136b92f6163433ac7d66140d8c176a31c7de936bc6f7a-merged.mount: Deactivated successfully.
Jan 23 09:51:49 compute-0 ceph-mon[74335]: pgmap v14: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Jan 23 09:51:49 compute-0 ceph-mon[74335]: 7.10 scrub starts
Jan 23 09:51:49 compute-0 ceph-mon[74335]: 7.10 scrub ok
Jan 23 09:51:49 compute-0 ceph-mon[74335]: 6.1c scrub starts
Jan 23 09:51:49 compute-0 ceph-mon[74335]: 6.1c scrub ok
Jan 23 09:51:49 compute-0 ceph-mon[74335]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2edb8fa1-89ea-44cd-9b6e-9f4d89095397"}]': finished
Jan 23 09:51:49 compute-0 ceph-mon[74335]: osdmap e39: 3 total, 2 up, 3 in
Jan 23 09:51:49 compute-0 ceph-mon[74335]: 7.13 scrub starts
Jan 23 09:51:49 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:51:49 compute-0 ceph-mon[74335]: 7.13 scrub ok
Jan 23 09:51:49 compute-0 ceph-mon[74335]: 6.12 scrub starts
Jan 23 09:51:49 compute-0 ceph-mon[74335]: 6.12 scrub ok
Jan 23 09:51:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3212942412' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 23 09:51:49 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:49 compute-0 ceph-mon[74335]: pgmap v16: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3560526778' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 23 09:51:49 compute-0 podman[92178]: 2026-01-23 09:51:49.612285089 +0000 UTC m=+2.156415460 container remove 40d6f5cea28c249bb0f03a17087e400a6c8346db4d9649dde35505d30cfcdd4f (image=quay.io/ceph/ceph:v19, name=gallant_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 09:51:49 compute-0 systemd[1]: libpod-conmon-40d6f5cea28c249bb0f03a17087e400a6c8346db4d9649dde35505d30cfcdd4f.scope: Deactivated successfully.
Jan 23 09:51:49 compute-0 systemd[1]: libpod-conmon-a8f4a9977bc832ca707b5ac6f3c42e3cc65f7409904f92af8c0228e79361375e.scope: Deactivated successfully.
Jan 23 09:51:49 compute-0 sudo[92164]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:49 compute-0 podman[92361]: 2026-01-23 09:51:49.739266443 +0000 UTC m=+0.047209233 container create 4b07621c4f8b773712896a1411c49455fb879cc65f73a343789fe8573dd8a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:51:49 compute-0 systemd[1]: Started libpod-conmon-4b07621c4f8b773712896a1411c49455fb879cc65f73a343789fe8573dd8a2f7.scope.
Jan 23 09:51:49 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add808065c3b34fca94a570d8bc9cc56d51f6a7f851763b956f0c117faee94e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add808065c3b34fca94a570d8bc9cc56d51f6a7f851763b956f0c117faee94e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add808065c3b34fca94a570d8bc9cc56d51f6a7f851763b956f0c117faee94e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add808065c3b34fca94a570d8bc9cc56d51f6a7f851763b956f0c117faee94e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:49 compute-0 podman[92361]: 2026-01-23 09:51:49.717319404 +0000 UTC m=+0.025262214 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:51:49 compute-0 podman[92361]: 2026-01-23 09:51:49.827782492 +0000 UTC m=+0.135725302 container init 4b07621c4f8b773712896a1411c49455fb879cc65f73a343789fe8573dd8a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_ellis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:51:49 compute-0 podman[92361]: 2026-01-23 09:51:49.837141406 +0000 UTC m=+0.145084196 container start 4b07621c4f8b773712896a1411c49455fb879cc65f73a343789fe8573dd8a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 09:51:49 compute-0 podman[92361]: 2026-01-23 09:51:49.841274712 +0000 UTC m=+0.149217522 container attach 4b07621c4f8b773712896a1411c49455fb879cc65f73a343789fe8573dd8a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]: {
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:     "1": [
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:         {
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             "devices": [
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "/dev/loop3"
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             ],
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             "lv_name": "ceph_lv0",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             "lv_size": "21470642176",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             "name": "ceph_lv0",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             "tags": {
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.cluster_name": "ceph",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.crush_device_class": "",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.encrypted": "0",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.osd_id": "1",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.type": "block",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.vdo": "0",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:                 "ceph.with_tpm": "0"
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             },
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             "type": "block",
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:             "vg_name": "ceph_vg0"
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:         }
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]:     ]
Jan 23 09:51:50 compute-0 sleepy_ellis[92377]: }
Jan 23 09:51:50 compute-0 systemd[1]: libpod-4b07621c4f8b773712896a1411c49455fb879cc65f73a343789fe8573dd8a2f7.scope: Deactivated successfully.
Jan 23 09:51:50 compute-0 podman[92361]: 2026-01-23 09:51:50.20326333 +0000 UTC m=+0.511206120 container died 4b07621c4f8b773712896a1411c49455fb879cc65f73a343789fe8573dd8a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:51:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-add808065c3b34fca94a570d8bc9cc56d51f6a7f851763b956f0c117faee94e6-merged.mount: Deactivated successfully.
Jan 23 09:51:50 compute-0 podman[92361]: 2026-01-23 09:51:50.26563855 +0000 UTC m=+0.573581340 container remove 4b07621c4f8b773712896a1411c49455fb879cc65f73a343789fe8573dd8a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_ellis, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 23 09:51:50 compute-0 systemd[1]: libpod-conmon-4b07621c4f8b773712896a1411c49455fb879cc65f73a343789fe8573dd8a2f7.scope: Deactivated successfully.
Jan 23 09:51:50 compute-0 sudo[92218]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:50 compute-0 sudo[92398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:51:50 compute-0 sudo[92398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:50 compute-0 sudo[92398]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:50 compute-0 sudo[92423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 09:51:50 compute-0 sudo[92423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:51:50 compute-0 podman[92562]: 2026-01-23 09:51:50.837535191 +0000 UTC m=+0.029485093 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:51:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v17: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:51 compute-0 sudo[92648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuhkfjvirovcloxsqhhylfmzdieknckq ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769161910.6477005-37686-89698124939399/async_wrapper.py j585695551175 30 /home/zuul/.ansible/tmp/ansible-tmp-1769161910.6477005-37686-89698124939399/AnsiballZ_command.py _'
Jan 23 09:51:51 compute-0 sudo[92648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:51 compute-0 ceph-mon[74335]: 7.1d scrub starts
Jan 23 09:51:51 compute-0 ceph-mon[74335]: 7.1d scrub ok
Jan 23 09:51:51 compute-0 ceph-mon[74335]: 6.17 scrub starts
Jan 23 09:51:51 compute-0 ceph-mon[74335]: 6.17 scrub ok
Jan 23 09:51:51 compute-0 podman[92562]: 2026-01-23 09:51:51.043635698 +0000 UTC m=+0.235585580 container create 7505f938d126125de830aefb44d849e4465d76f6cb4274f1a1e9f4d805733606 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 09:51:51 compute-0 systemd[1]: Started libpod-conmon-7505f938d126125de830aefb44d849e4465d76f6cb4274f1a1e9f4d805733606.scope.
Jan 23 09:51:51 compute-0 ansible-async_wrapper.py[92650]: Invoked with j585695551175 30 /home/zuul/.ansible/tmp/ansible-tmp-1769161910.6477005-37686-89698124939399/AnsiballZ_command.py _
Jan 23 09:51:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:51 compute-0 ansible-async_wrapper.py[92658]: Starting module and watcher
Jan 23 09:51:51 compute-0 ansible-async_wrapper.py[92658]: Start watching 92659 (30)
Jan 23 09:51:51 compute-0 ansible-async_wrapper.py[92659]: Start module (92659)
Jan 23 09:51:51 compute-0 ansible-async_wrapper.py[92650]: Return async_wrapper task started.
Jan 23 09:51:51 compute-0 sudo[92648]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:51 compute-0 podman[92562]: 2026-01-23 09:51:51.222177518 +0000 UTC m=+0.414127420 container init 7505f938d126125de830aefb44d849e4465d76f6cb4274f1a1e9f4d805733606 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 09:51:51 compute-0 podman[92562]: 2026-01-23 09:51:51.228581838 +0000 UTC m=+0.420531710 container start 7505f938d126125de830aefb44d849e4465d76f6cb4274f1a1e9f4d805733606 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_matsumoto, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:51:51 compute-0 elated_matsumoto[92653]: 167 167
Jan 23 09:51:51 compute-0 systemd[1]: libpod-7505f938d126125de830aefb44d849e4465d76f6cb4274f1a1e9f4d805733606.scope: Deactivated successfully.
Jan 23 09:51:51 compute-0 podman[92562]: 2026-01-23 09:51:51.247587595 +0000 UTC m=+0.439537497 container attach 7505f938d126125de830aefb44d849e4465d76f6cb4274f1a1e9f4d805733606 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:51 compute-0 podman[92562]: 2026-01-23 09:51:51.248065268 +0000 UTC m=+0.440015170 container died 7505f938d126125de830aefb44d849e4465d76f6cb4274f1a1e9f4d805733606 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_matsumoto, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:51:51 compute-0 python3[92660]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-baa6c198e15f67cc3eb35fc20724236bc380ada073e2aed51d8ec9a977acb6e2-merged.mount: Deactivated successfully.
Jan 23 09:51:51 compute-0 podman[92562]: 2026-01-23 09:51:51.380314901 +0000 UTC m=+0.572264783 container remove 7505f938d126125de830aefb44d849e4465d76f6cb4274f1a1e9f4d805733606 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_matsumoto, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 09:51:51 compute-0 systemd[1]: libpod-conmon-7505f938d126125de830aefb44d849e4465d76f6cb4274f1a1e9f4d805733606.scope: Deactivated successfully.
Jan 23 09:51:51 compute-0 podman[92677]: 2026-01-23 09:51:51.413088726 +0000 UTC m=+0.059103769 container create adcd0b7861065fb7ae4495c7e647cf61eac11c72db8a9ca3df4bcb13f390a764 (image=quay.io/ceph/ceph:v19, name=epic_spence, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 09:51:51 compute-0 systemd[1]: Started libpod-conmon-adcd0b7861065fb7ae4495c7e647cf61eac11c72db8a9ca3df4bcb13f390a764.scope.
Jan 23 09:51:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03248b1b350e073ca3a11349a0da2ec8e045e59dc1ef9b43b2a0ea1e7eb063e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03248b1b350e073ca3a11349a0da2ec8e045e59dc1ef9b43b2a0ea1e7eb063e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:51 compute-0 podman[92677]: 2026-01-23 09:51:51.391697812 +0000 UTC m=+0.037712875 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:51 compute-0 podman[92677]: 2026-01-23 09:51:51.489370619 +0000 UTC m=+0.135385662 container init adcd0b7861065fb7ae4495c7e647cf61eac11c72db8a9ca3df4bcb13f390a764 (image=quay.io/ceph/ceph:v19, name=epic_spence, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 09:51:51 compute-0 podman[92677]: 2026-01-23 09:51:51.495878683 +0000 UTC m=+0.141893716 container start adcd0b7861065fb7ae4495c7e647cf61eac11c72db8a9ca3df4bcb13f390a764 (image=quay.io/ceph/ceph:v19, name=epic_spence, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 09:51:51 compute-0 podman[92677]: 2026-01-23 09:51:51.499434463 +0000 UTC m=+0.145449506 container attach adcd0b7861065fb7ae4495c7e647cf61eac11c72db8a9ca3df4bcb13f390a764 (image=quay.io/ceph/ceph:v19, name=epic_spence, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:51:51 compute-0 podman[92702]: 2026-01-23 09:51:51.543726653 +0000 UTC m=+0.047473271 container create 7293d97dacba68a9dd9de802d4be7535f11d76d326b21d2aca1cb826b8d6123b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bartik, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Jan 23 09:51:51 compute-0 systemd[1]: Started libpod-conmon-7293d97dacba68a9dd9de802d4be7535f11d76d326b21d2aca1cb826b8d6123b.scope.
Jan 23 09:51:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb77550f83e7ae930035f5378a161de1815b151416a76cec161baf030ae10d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:51 compute-0 podman[92702]: 2026-01-23 09:51:51.519804418 +0000 UTC m=+0.023551056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb77550f83e7ae930035f5378a161de1815b151416a76cec161baf030ae10d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb77550f83e7ae930035f5378a161de1815b151416a76cec161baf030ae10d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb77550f83e7ae930035f5378a161de1815b151416a76cec161baf030ae10d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:51 compute-0 podman[92702]: 2026-01-23 09:51:51.633724544 +0000 UTC m=+0.137471182 container init 7293d97dacba68a9dd9de802d4be7535f11d76d326b21d2aca1cb826b8d6123b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bartik, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:51:51 compute-0 podman[92702]: 2026-01-23 09:51:51.641414231 +0000 UTC m=+0.145160849 container start 7293d97dacba68a9dd9de802d4be7535f11d76d326b21d2aca1cb826b8d6123b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:51:51 compute-0 podman[92702]: 2026-01-23 09:51:51.646737111 +0000 UTC m=+0.150483729 container attach 7293d97dacba68a9dd9de802d4be7535f11d76d326b21d2aca1cb826b8d6123b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bartik, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 09:51:51 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14424 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:51:51 compute-0 epic_spence[92694]: 
Jan 23 09:51:51 compute-0 epic_spence[92694]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 09:51:51 compute-0 systemd[1]: libpod-adcd0b7861065fb7ae4495c7e647cf61eac11c72db8a9ca3df4bcb13f390a764.scope: Deactivated successfully.
Jan 23 09:51:51 compute-0 podman[92677]: 2026-01-23 09:51:51.919023646 +0000 UTC m=+0.565038689 container died adcd0b7861065fb7ae4495c7e647cf61eac11c72db8a9ca3df4bcb13f390a764 (image=quay.io/ceph/ceph:v19, name=epic_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Jan 23 09:51:52 compute-0 ceph-mon[74335]: 6.15 scrub starts
Jan 23 09:51:52 compute-0 ceph-mon[74335]: 6.15 scrub ok
Jan 23 09:51:52 compute-0 ceph-mon[74335]: pgmap v17: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e03248b1b350e073ca3a11349a0da2ec8e045e59dc1ef9b43b2a0ea1e7eb063e-merged.mount: Deactivated successfully.
Jan 23 09:51:52 compute-0 podman[92677]: 2026-01-23 09:51:52.092967086 +0000 UTC m=+0.738982129 container remove adcd0b7861065fb7ae4495c7e647cf61eac11c72db8a9ca3df4bcb13f390a764 (image=quay.io/ceph/ceph:v19, name=epic_spence, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 09:51:52 compute-0 systemd[1]: libpod-conmon-adcd0b7861065fb7ae4495c7e647cf61eac11c72db8a9ca3df4bcb13f390a764.scope: Deactivated successfully.
Jan 23 09:51:52 compute-0 ansible-async_wrapper.py[92659]: Module complete (92659)
Jan 23 09:51:52 compute-0 lvm[92851]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:51:52 compute-0 lvm[92851]: VG ceph_vg0 finished
Jan 23 09:51:52 compute-0 sudo[92873]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymlbndtjtuhknofjyqhaskefdqysekvn ; /usr/bin/python3'
Jan 23 09:51:52 compute-0 sudo[92873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:52 compute-0 mystifying_bartik[92719]: {}
Jan 23 09:51:52 compute-0 lvm[92877]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:51:52 compute-0 lvm[92877]: VG ceph_vg0 finished
Jan 23 09:51:52 compute-0 systemd[1]: libpod-7293d97dacba68a9dd9de802d4be7535f11d76d326b21d2aca1cb826b8d6123b.scope: Deactivated successfully.
Jan 23 09:51:52 compute-0 systemd[1]: libpod-7293d97dacba68a9dd9de802d4be7535f11d76d326b21d2aca1cb826b8d6123b.scope: Consumed 1.196s CPU time.
Jan 23 09:51:52 compute-0 podman[92702]: 2026-01-23 09:51:52.410741495 +0000 UTC m=+0.914488133 container died 7293d97dacba68a9dd9de802d4be7535f11d76d326b21d2aca1cb826b8d6123b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bartik, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 09:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eb77550f83e7ae930035f5378a161de1815b151416a76cec161baf030ae10d5-merged.mount: Deactivated successfully.
Jan 23 09:51:52 compute-0 podman[92702]: 2026-01-23 09:51:52.458845323 +0000 UTC m=+0.962591941 container remove 7293d97dacba68a9dd9de802d4be7535f11d76d326b21d2aca1cb826b8d6123b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 23 09:51:52 compute-0 systemd[1]: libpod-conmon-7293d97dacba68a9dd9de802d4be7535f11d76d326b21d2aca1cb826b8d6123b.scope: Deactivated successfully.
Jan 23 09:51:52 compute-0 sudo[92423]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:51:52 compute-0 python3[92878]: ansible-ansible.legacy.async_status Invoked with jid=j585695551175.92650 mode=status _async_dir=/root/.ansible_async
Jan 23 09:51:52 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:51:52 compute-0 sudo[92873]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:52 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:52 compute-0 sudo[92937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkssajoytsuyfbtanpaozgktgfxlfbon ; /usr/bin/python3'
Jan 23 09:51:52 compute-0 sudo[92937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:52 compute-0 python3[92939]: ansible-ansible.legacy.async_status Invoked with jid=j585695551175.92650 mode=cleanup _async_dir=/root/.ansible_async
Jan 23 09:51:52 compute-0 sudo[92937]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v18: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:53 compute-0 ceph-mon[74335]: from='client.14424 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:51:53 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:53 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:51:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:53 compute-0 sudo[92963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nigxgussxjntovnnlwewbahvvnmueotg ; /usr/bin/python3'
Jan 23 09:51:53 compute-0 sudo[92963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:53 compute-0 python3[92965]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:53 compute-0 podman[92966]: 2026-01-23 09:51:53.583882747 +0000 UTC m=+0.028573617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:53 compute-0 podman[92966]: 2026-01-23 09:51:53.972229557 +0000 UTC m=+0.416920397 container create 34af31203cb16f6cfc521f0f1b7c93ef15820c5cbf1044673daa9924deb2b3d2 (image=quay.io/ceph/ceph:v19, name=determined_shamir, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:51:54 compute-0 systemd[1]: Started libpod-conmon-34af31203cb16f6cfc521f0f1b7c93ef15820c5cbf1044673daa9924deb2b3d2.scope.
Jan 23 09:51:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1503bee94a28ca0b7eeb183cb4471ac458da5a72cefbea745bf437e5297ecc4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1503bee94a28ca0b7eeb183cb4471ac458da5a72cefbea745bf437e5297ecc4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:54 compute-0 ceph-mon[74335]: pgmap v18: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:54 compute-0 podman[92966]: 2026-01-23 09:51:54.541706171 +0000 UTC m=+0.986397031 container init 34af31203cb16f6cfc521f0f1b7c93ef15820c5cbf1044673daa9924deb2b3d2 (image=quay.io/ceph/ceph:v19, name=determined_shamir, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:51:54 compute-0 podman[92966]: 2026-01-23 09:51:54.54982899 +0000 UTC m=+0.994519860 container start 34af31203cb16f6cfc521f0f1b7c93ef15820c5cbf1044673daa9924deb2b3d2 (image=quay.io/ceph/ceph:v19, name=determined_shamir, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 23 09:51:54 compute-0 podman[92966]: 2026-01-23 09:51:54.650650255 +0000 UTC m=+1.095341095 container attach 34af31203cb16f6cfc521f0f1b7c93ef15820c5cbf1044673daa9924deb2b3d2 (image=quay.io/ceph/ceph:v19, name=determined_shamir, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 09:51:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v19: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:54 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14430 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:51:54 compute-0 determined_shamir[92983]: 
Jan 23 09:51:54 compute-0 determined_shamir[92983]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 09:51:54 compute-0 systemd[1]: libpod-34af31203cb16f6cfc521f0f1b7c93ef15820c5cbf1044673daa9924deb2b3d2.scope: Deactivated successfully.
Jan 23 09:51:54 compute-0 podman[92966]: 2026-01-23 09:51:54.985840796 +0000 UTC m=+1.430531646 container died 34af31203cb16f6cfc521f0f1b7c93ef15820c5cbf1044673daa9924deb2b3d2 (image=quay.io/ceph/ceph:v19, name=determined_shamir, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 23 09:51:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 23 09:51:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 23 09:51:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:51:55 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:51:55 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 23 09:51:55 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 23 09:51:56 compute-0 ansible-async_wrapper.py[92658]: Done in kid B.
Jan 23 09:51:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1503bee94a28ca0b7eeb183cb4471ac458da5a72cefbea745bf437e5297ecc4-merged.mount: Deactivated successfully.
Jan 23 09:51:56 compute-0 ceph-mon[74335]: pgmap v19: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:56 compute-0 ceph-mon[74335]: from='client.14430 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:51:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 23 09:51:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:51:56 compute-0 ceph-mon[74335]: Deploying daemon osd.2 on compute-2
Jan 23 09:51:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v20: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:57 compute-0 podman[92966]: 2026-01-23 09:51:57.361636509 +0000 UTC m=+3.806327349 container remove 34af31203cb16f6cfc521f0f1b7c93ef15820c5cbf1044673daa9924deb2b3d2 (image=quay.io/ceph/ceph:v19, name=determined_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 09:51:57 compute-0 sudo[92963]: pam_unix(sudo:session): session closed for user root
Jan 23 09:51:57 compute-0 systemd[1]: libpod-conmon-34af31203cb16f6cfc521f0f1b7c93ef15820c5cbf1044673daa9924deb2b3d2.scope: Deactivated successfully.
Jan 23 09:51:57 compute-0 ceph-mon[74335]: pgmap v20: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:58 compute-0 sudo[93042]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnfkhnwaalweavgoezgjgnnirdtffqyp ; /usr/bin/python3'
Jan 23 09:51:58 compute-0 sudo[93042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:51:58 compute-0 python3[93044]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:51:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:51:58 compute-0 podman[93045]: 2026-01-23 09:51:58.353836901 +0000 UTC m=+0.022375479 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:51:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v21: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:51:59 compute-0 podman[93045]: 2026-01-23 09:51:59.037007354 +0000 UTC m=+0.705545942 container create a89070972084e0eb4d75546b2dd3af0b93dfa9549a180920aa4148c65bc05f87 (image=quay.io/ceph/ceph:v19, name=nostalgic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:51:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:51:59 compute-0 systemd[1]: Started libpod-conmon-a89070972084e0eb4d75546b2dd3af0b93dfa9549a180920aa4148c65bc05f87.scope.
Jan 23 09:51:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fa42b0c0e7ddd2cbe869822d82e19244e1ec71f6da32257e967c1c90bbf83a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fa42b0c0e7ddd2cbe869822d82e19244e1ec71f6da32257e967c1c90bbf83a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:00 compute-0 podman[93045]: 2026-01-23 09:52:00.308117688 +0000 UTC m=+1.976656276 container init a89070972084e0eb4d75546b2dd3af0b93dfa9549a180920aa4148c65bc05f87 (image=quay.io/ceph/ceph:v19, name=nostalgic_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 09:52:00 compute-0 podman[93045]: 2026-01-23 09:52:00.31371919 +0000 UTC m=+1.982257748 container start a89070972084e0eb4d75546b2dd3af0b93dfa9549a180920aa4148c65bc05f87 (image=quay.io/ceph/ceph:v19, name=nostalgic_shaw, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:52:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14439 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:52:00 compute-0 nostalgic_shaw[93060]: 
Jan 23 09:52:00 compute-0 nostalgic_shaw[93060]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 23 09:52:00 compute-0 systemd[1]: libpod-a89070972084e0eb4d75546b2dd3af0b93dfa9549a180920aa4148c65bc05f87.scope: Deactivated successfully.
Jan 23 09:52:00 compute-0 ceph-mon[74335]: pgmap v21: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:00 compute-0 podman[93045]: 2026-01-23 09:52:00.763880182 +0000 UTC m=+2.432418760 container attach a89070972084e0eb4d75546b2dd3af0b93dfa9549a180920aa4148c65bc05f87 (image=quay.io/ceph/ceph:v19, name=nostalgic_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:52:00 compute-0 podman[93045]: 2026-01-23 09:52:00.764459208 +0000 UTC m=+2.432997766 container died a89070972084e0eb4d75546b2dd3af0b93dfa9549a180920aa4148c65bc05f87 (image=quay.io/ceph/ceph:v19, name=nostalgic_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 09:52:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:52:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v22: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:01 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-35fa42b0c0e7ddd2cbe869822d82e19244e1ec71f6da32257e967c1c90bbf83a-merged.mount: Deactivated successfully.
Jan 23 09:52:02 compute-0 ceph-mon[74335]: from='client.14439 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:52:02 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:02 compute-0 ceph-mon[74335]: pgmap v22: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:02 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v23: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:52:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:52:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:52:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:52:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:52:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:52:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 23 09:52:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 23 09:52:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:52:03 compute-0 podman[93045]: 2026-01-23 09:52:03.378128709 +0000 UTC m=+5.046667267 container remove a89070972084e0eb4d75546b2dd3af0b93dfa9549a180920aa4148c65bc05f87 (image=quay.io/ceph/ceph:v19, name=nostalgic_shaw, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 09:52:03 compute-0 sudo[93042]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:03 compute-0 systemd[1]: libpod-conmon-a89070972084e0eb4d75546b2dd3af0b93dfa9549a180920aa4148c65bc05f87.scope: Deactivated successfully.
Jan 23 09:52:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 23 09:52:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:52:03 compute-0 ceph-mon[74335]: from='osd.2 [v2:192.168.122.102:6800/1020282776,v1:192.168.122.102:6801/1020282776]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 23 09:52:03 compute-0 ceph-mon[74335]: pgmap v23: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:03 compute-0 ceph-mon[74335]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 23 09:52:04 compute-0 sudo[93119]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsgfaobqhrkvwfirgnfnbtwcbpgyorsd ; /usr/bin/python3'
Jan 23 09:52:04 compute-0 sudo[93119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:52:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 23 09:52:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e40 e40: 3 total, 2 up, 3 in
Jan 23 09:52:04 compute-0 python3[93121]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:52:04 compute-0 podman[93122]: 2026-01-23 09:52:04.440852231 +0000 UTC m=+0.024244681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:52:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:04 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 2 up, 3 in
Jan 23 09:52:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v25: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:05 compute-0 podman[93122]: 2026-01-23 09:52:05.013022132 +0000 UTC m=+0.596414552 container create 9c84c8322799a9c9476270378ecc38f7d23d5bc5fc037136c1c708a46d7e3652 (image=quay.io/ceph/ceph:v19, name=upbeat_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 09:52:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Jan 23 09:52:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 23 09:52:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e40 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Jan 23 09:52:05 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:52:05 compute-0 systemd[1]: Started libpod-conmon-9c84c8322799a9c9476270378ecc38f7d23d5bc5fc037136c1c708a46d7e3652.scope.
Jan 23 09:52:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 23 09:52:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1945a8a109760168a3d92276055e7f4bcedae589065ced3c3a27555c17b8fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1945a8a109760168a3d92276055e7f4bcedae589065ced3c3a27555c17b8fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:05 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 4a82989e-3b7f-4370-8672-3e09753c7f87 (Updating rgw.rgw deployment (+3 -> 3))
Jan 23 09:52:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.yzflfx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 23 09:52:06 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.yzflfx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:06 compute-0 podman[93122]: 2026-01-23 09:52:06.633700339 +0000 UTC m=+2.217092789 container init 9c84c8322799a9c9476270378ecc38f7d23d5bc5fc037136c1c708a46d7e3652 (image=quay.io/ceph/ceph:v19, name=upbeat_jepsen, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 23 09:52:06 compute-0 podman[93122]: 2026-01-23 09:52:06.639561779 +0000 UTC m=+2.222954209 container start 9c84c8322799a9c9476270378ecc38f7d23d5bc5fc037136c1c708a46d7e3652 (image=quay.io/ceph/ceph:v19, name=upbeat_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:52:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v26: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:06 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:52:06 compute-0 upbeat_jepsen[93137]: 
Jan 23 09:52:06 compute-0 upbeat_jepsen[93137]: [{"container_id": "ae2342c943dc", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.09%", "created": "2026-01-23T09:48:36.807803Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T09:51:35.463159Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2026-01-23T09:48:36.708924Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f3005f84-239a-55b6-a948-8f1fb592b920@crash.compute-0", "version": "19.2.3"}, {"container_id": "0d5b0e98337a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.29%", "created": "2026-01-23T09:49:24.320059Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-23T09:51:35.281877Z", "memory_usage": 7821328, "ports": [], "service_name": "crash", "started": "2026-01-23T09:49:24.178040Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f3005f84-239a-55b6-a948-8f1fb592b920@crash.compute-1", "version": "19.2.3"}, {"container_id": "044486c85d2f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.44%", "created": "2026-01-23T09:50:58.481098Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-23T09:51:35.066566Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2026-01-23T09:50:58.385499Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f3005f84-239a-55b6-a948-8f1fb592b920@crash.compute-2", "version": "19.2.3"}, {"container_id": "e4a1c45f747e", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "24.90%", "created": "2026-01-23T09:47:45.696307Z", "daemon_id": "compute-0.nbdygh", "daemon_name": "mgr.compute-0.nbdygh", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T09:51:35.463087Z", "memory_usage": 540331212, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-23T09:47:45.605973Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-f3005f84-239a-55b6-a948-8f1fb592b920@mgr.compute-0.nbdygh", "version": "19.2.3"}, {"container_id": "c38fbb9e0518", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "68.42%", "created": "2026-01-23T09:50:56.805213Z", "daemon_id": "compute-1.jmakme", "daemon_name": "mgr.compute-1.jmakme", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-23T09:51:35.282175Z", "memory_usage": 503840768, "ports": [8765], "service_name": "mgr", "started": "2026-01-23T09:50:56.657007Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f3005f84-239a-55b6-a948-8f1fb592b920@mgr.compute-1.jmakme", "version": "19.2.3"}, {"container_id": "493e3a3dda77", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "73.09%", "created": "2026-01-23T09:50:54.545043Z", "daemon_id": "compute-2.uczrot", "daemon_name": "mgr.compute-2.uczrot", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-23T09:51:35.066473Z", "memory_usage": 504574771, "ports": [8765], "service_name": "mgr", "started": "2026-01-23T09:50:54.457689Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f3005f84-239a-55b6-a948-8f1fb592b920@mgr.compute-2.uczrot", "version": "19.2.3"}, {"container_id": "cbfd7f9a2ad9", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.18%", "created": "2026-01-23T09:47:38.231089Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T09:51:35.462981Z", "memory_request": 2147483648, "memory_usage": 56518246, "ports": [], "service_name": "mon", "started": "2026-01-23T09:47:41.751004Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f3005f84-239a-55b6-a948-8f1fb592b920@mon.compute-0", "version": "19.2.3"}, {"container_id": "c1579b7599b2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.72%", "created": "2026-01-23T09:50:47.386898Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-23T09:51:35.282103Z", "memory_request": 2147483648, "memory_usage": 42540728, "ports": [], "service_name": "mon", "started": "2026-01-23T09:50:47.292107Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-f3005f84-239a-55b6-a948-8f1fb592b920@mon.compute-1", "version": "19.2.3"}, {"container_id": "40f46bf40aa2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.77%", "created": "2026-01-23T09:50:40.071757Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-23T09:51:35.066307Z", "memory_request": 2147483648, "memory_usage": 41481666, "ports": [], "service_name": "mon", "started": "2026-01-23T09:50:39.962092Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f3005f84-239a-55b6-a948-8f1fb592b920@mon.compute-2", "version": "19.2.3"}, {"container_id": "97848d12ab63", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80", "quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.13%", "created": "2026-01-23T09:51:16.573959Z", "daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T09:51:35.463299Z", "memory_usage": 4016046, "ports": [9100], "service_name": "node-exporter", "started": "2026-01-23T09:51:16.482288Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f3005f84-239a-55b6-a948-8f1fb592b920@node-exporter.compute-0", "version": "1.7.0"}, {"container_id": "965059b66041", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80", "quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.16%", "created": "2026-01-23T09:51:19.423948Z", "daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-23T09:51:35.282244Z", "memory_usage": 3698327, "ports": [9100], "service_name": "node-exporter", "started": "2026-01-23T09:51:19.315847Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f3005f84-239a-55b6-a948-8f1fb592b920@node-exporter.compute-1", "version": "1.7.0"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2026-01-23T09:51:44.598237Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "ba38de352265", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": 
"aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.86%", "created": "2026-01-23T09:49:39.821539Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T09:51:35.463230Z", "memory_request": 4294967296, "memory_usage": 73903636, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-23T09:49:38.998953Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f3005f84-239a-55b6-a948-8f1fb592b920@osd.1", "version": "19.2.3"}, {"container_id": "70bc56e6e481", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.05%", "created": "2026-01-23T09:49:45.244727Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-23T09:51:35.282030Z", "memory_request": 4294967296, "memory_usage": 70831308, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-23T09:49:45.044939Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f3005f84-239a-55b6-a948-8f1fb592b920@osd.0", "version": "19.2.3"}, {"daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-23T09:52:01.566601Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "memory_request": 4294967296, "ports": [], "service_name": "osd.default_drive_group", "status": 2, "status_desc": "starting"}]
Jan 23 09:52:07 compute-0 systemd[1]: libpod-9c84c8322799a9c9476270378ecc38f7d23d5bc5fc037136c1c708a46d7e3652.scope: Deactivated successfully.
Jan 23 09:52:07 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Jan 23 09:52:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e41 e41: 3 total, 2 up, 3 in
Jan 23 09:52:07 compute-0 rsyslogd[1003]: message too long (12389) with configured size 8096, begin of message is: [{"container_id": "ae2342c943dc", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 09:52:07 compute-0 podman[93122]: 2026-01-23 09:52:07.254460815 +0000 UTC m=+2.837853265 container attach 9c84c8322799a9c9476270378ecc38f7d23d5bc5fc037136c1c708a46d7e3652 (image=quay.io/ceph/ceph:v19, name=upbeat_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 23 09:52:07 compute-0 ceph-mon[74335]: purged_snaps scrub starts
Jan 23 09:52:07 compute-0 ceph-mon[74335]: purged_snaps scrub ok
Jan 23 09:52:07 compute-0 ceph-mon[74335]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 23 09:52:07 compute-0 ceph-mon[74335]: from='osd.2 [v2:192.168.122.102:6800/1020282776,v1:192.168.122.102:6801/1020282776]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 23 09:52:07 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:07 compute-0 ceph-mon[74335]: osdmap e40: 3 total, 2 up, 3 in
Jan 23 09:52:07 compute-0 ceph-mon[74335]: pgmap v25: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:07 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:07 compute-0 ceph-mon[74335]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 23 09:52:07 compute-0 podman[93122]: 2026-01-23 09:52:07.255968576 +0000 UTC m=+2.839361006 container died 9c84c8322799a9c9476270378ecc38f7d23d5bc5fc037136c1c708a46d7e3652 (image=quay.io/ceph/ceph:v19, name=upbeat_jepsen, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:52:07 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:07 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 2 up, 3 in
Jan 23 09:52:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:07 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:07 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:07 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.yzflfx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 23 09:52:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd1945a8a109760168a3d92276055e7f4bcedae589065ced3c3a27555c17b8fd-merged.mount: Deactivated successfully.
Jan 23 09:52:07 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:07 compute-0 podman[93122]: 2026-01-23 09:52:07.35463735 +0000 UTC m=+2.938029780 container remove 9c84c8322799a9c9476270378ecc38f7d23d5bc5fc037136c1c708a46d7e3652 (image=quay.io/ceph/ceph:v19, name=upbeat_jepsen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 23 09:52:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:07 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:07 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.yzflfx on compute-2
Jan 23 09:52:07 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.yzflfx on compute-2
Jan 23 09:52:07 compute-0 sudo[93119]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:07 compute-0 systemd[1]: libpod-conmon-9c84c8322799a9c9476270378ecc38f7d23d5bc5fc037136c1c708a46d7e3652.scope: Deactivated successfully.
Jan 23 09:52:08 compute-0 sudo[93197]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqstokrlxbzhxeklcmbpsioxhpyetmdi ; /usr/bin/python3'
Jan 23 09:52:08 compute-0 sudo[93197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:52:08 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:08 compute-0 python3[93199]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:52:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:08 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:52:08 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:08 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.yzflfx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:08 compute-0 ceph-mon[74335]: pgmap v26: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:08 compute-0 ceph-mon[74335]: from='client.14445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 09:52:08 compute-0 ceph-mon[74335]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Jan 23 09:52:08 compute-0 ceph-mon[74335]: osdmap e41: 3 total, 2 up, 3 in
Jan 23 09:52:08 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:08 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.yzflfx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:08 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:08 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:08 compute-0 ceph-mon[74335]: Deploying daemon rgw.rgw.compute-2.yzflfx on compute-2
Jan 23 09:52:08 compute-0 podman[93200]: 2026-01-23 09:52:08.369285593 +0000 UTC m=+0.028625270 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:52:08 compute-0 podman[93200]: 2026-01-23 09:52:08.545821287 +0000 UTC m=+0.205160954 container create 0f993124219bdaa27ec789e09804eb53a65fae0e59d99ea7bc837a046ad0b644 (image=quay.io/ceph/ceph:v19, name=elated_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:52:08 compute-0 systemd[1]: Started libpod-conmon-0f993124219bdaa27ec789e09804eb53a65fae0e59d99ea7bc837a046ad0b644.scope.
Jan 23 09:52:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528a9d683be828b2a0090aec0250d534fc94c2db392ecc8e4364f775e3afb26b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528a9d683be828b2a0090aec0250d534fc94c2db392ecc8e4364f775e3afb26b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v28: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:09 compute-0 podman[93200]: 2026-01-23 09:52:09.028257257 +0000 UTC m=+0.687596944 container init 0f993124219bdaa27ec789e09804eb53a65fae0e59d99ea7bc837a046ad0b644 (image=quay.io/ceph/ceph:v19, name=elated_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 23 09:52:09 compute-0 podman[93200]: 2026-01-23 09:52:09.035471624 +0000 UTC m=+0.694811281 container start 0f993124219bdaa27ec789e09804eb53a65fae0e59d99ea7bc837a046ad0b644 (image=quay.io/ceph/ceph:v19, name=elated_saha, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:52:09 compute-0 podman[93200]: 2026-01-23 09:52:09.042604988 +0000 UTC m=+0.701944675 container attach 0f993124219bdaa27ec789e09804eb53a65fae0e59d99ea7bc837a046ad0b644 (image=quay.io/ceph/ceph:v19, name=elated_saha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 09:52:09 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:09 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 23 09:52:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/237302038' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 09:52:09 compute-0 elated_saha[93216]: 
Jan 23 09:52:09 compute-0 elated_saha[93216]: {"fsid":"f3005f84-239a-55b6-a948-8f1fb592b920","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":76,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":2,"osd_up_since":1769161804,"num_in_osds":3,"osd_in_since":1769161907,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194}],"num_pgs":194,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":56266752,"bytes_avail":42885017600,"bytes_total":42941284352},"fsmap":{"epoch":2,"btime":"2026-01-23T09:51:34:000852+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":3,"modified":"2026-01-23T09:50:57.514849+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"4a82989e-3b7f-4370-8672-3e09753c7f87":{"message":"Updating rgw.rgw deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 23 09:52:09 compute-0 systemd[1]: libpod-0f993124219bdaa27ec789e09804eb53a65fae0e59d99ea7bc837a046ad0b644.scope: Deactivated successfully.
Jan 23 09:52:09 compute-0 podman[93200]: 2026-01-23 09:52:09.659681472 +0000 UTC m=+1.319021129 container died 0f993124219bdaa27ec789e09804eb53a65fae0e59d99ea7bc837a046ad0b644 (image=quay.io/ceph/ceph:v19, name=elated_saha, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:52:09 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:09 compute-0 ceph-mon[74335]: pgmap v28: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:09 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.19( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.204210281s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 159.180847168s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.19( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.204210281s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180847168s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.203755379s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 159.180862427s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.203755379s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180862427s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[6.1b( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.048263550s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 161.025497437s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.1c( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.203631401s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 159.180877686s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.1c( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.203631401s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180877686s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.548350334s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 active pruub 158.525665283s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.548350334s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.525665283s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[6.1b( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.048263550s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025497437s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.1d( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202979088s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 159.180511475s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.1d( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202979088s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180511475s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.3( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202818871s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 159.180450439s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.3( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202818871s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180450439s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[6.1( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.047470093s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 161.025238037s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[6.1( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.047470093s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025238037s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202747345s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 159.180603027s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.6( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202155113s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 159.180221558s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202369690s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 159.180450439s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.6( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202155113s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180221558s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=24/25 n=0 ec=14/14 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202161789s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 159.180435181s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.0( empty local-lis/les=26/29 n=0 ec=16/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.047054291s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 161.025344849s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=24/25 n=0 ec=14/14 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202161789s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180435181s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202747345s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180603027s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.202369690s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180450439s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.0( empty local-lis/les=26/29 n=0 ec=16/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.047054291s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025344849s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547570229s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 active pruub 158.526062012s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.d( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.047237396s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 161.025741577s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547494888s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 active pruub 158.526016235s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.d( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.047237396s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025741577s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547494888s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526016235s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547570229s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526062012s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547606468s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 active pruub 158.526367188s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.b( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.046749115s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 161.025558472s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547606468s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526367188s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.c( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547245979s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 active pruub 158.526046753s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.b( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.046749115s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025558472s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.c( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547245979s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526046753s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547745705s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 active pruub 158.526596069s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547745705s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526596069s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.10( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547472000s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 active pruub 158.526458740s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.10( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547472000s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526458740s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.13( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547232628s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 active pruub 158.526306152s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.201071739s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 159.180175781s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.13( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547232628s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526306152s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547451019s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 active pruub 158.526580811s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=11.201071739s) [] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180175781s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547451019s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526580811s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.046650887s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 161.025833130s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.046650887s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025833130s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.8( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.046287537s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 161.025527954s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.046434402s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 161.025756836s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.046434402s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025756836s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547143936s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 active pruub 158.526580811s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=10.547143936s) [] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526580811s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 41 pg[5.8( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=41 pruub=13.046287537s) [] r=-1 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025527954s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-528a9d683be828b2a0090aec0250d534fc94c2db392ecc8e4364f775e3afb26b-merged.mount: Deactivated successfully.
Jan 23 09:52:10 compute-0 podman[93200]: 2026-01-23 09:52:10.221928134 +0000 UTC m=+1.881267791 container remove 0f993124219bdaa27ec789e09804eb53a65fae0e59d99ea7bc837a046ad0b644 (image=quay.io/ceph/ceph:v19, name=elated_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:52:10 compute-0 systemd[1]: libpod-conmon-0f993124219bdaa27ec789e09804eb53a65fae0e59d99ea7bc837a046ad0b644.scope: Deactivated successfully.
Jan 23 09:52:10 compute-0 sudo[93197]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:52:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:52:10 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:10 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 23 09:52:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.syfcuk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 23 09:52:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.syfcuk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.syfcuk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 23 09:52:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:10 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.syfcuk on compute-1
Jan 23 09:52:10 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.syfcuk on compute-1
Jan 23 09:52:11 compute-0 sudo[93278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrscpevhtrgsutggwnoxsblwtfvzvjtk ; /usr/bin/python3'
Jan 23 09:52:11 compute-0 sudo[93278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:52:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v29: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:11 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/237302038' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 09:52:11 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:11 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:11 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:11 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:11 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.syfcuk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:11 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.syfcuk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:11 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:11 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:11 compute-0 python3[93280]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:52:11 compute-0 podman[93281]: 2026-01-23 09:52:11.208718229 +0000 UTC m=+0.026013519 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:52:11 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 23 09:52:11 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:11 compute-0 podman[93281]: 2026-01-23 09:52:11.867388875 +0000 UTC m=+0.684684144 container create 300c01d6e0d1d3dae5676aecfc1cda97296fb74d29d26449cd0a259584b20346 (image=quay.io/ceph/ceph:v19, name=flamboyant_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 09:52:12 compute-0 systemd[1]: Started libpod-conmon-300c01d6e0d1d3dae5676aecfc1cda97296fb74d29d26449cd0a259584b20346.scope.
Jan 23 09:52:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45f074a147f6f89401c487219a6280c4ce429a72ea163a6c94c9dc244d5b79e3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45f074a147f6f89401c487219a6280c4ce429a72ea163a6c94c9dc244d5b79e3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:12 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:12 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:12 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:12 compute-0 podman[93281]: 2026-01-23 09:52:12.398579481 +0000 UTC m=+1.215874760 container init 300c01d6e0d1d3dae5676aecfc1cda97296fb74d29d26449cd0a259584b20346 (image=quay.io/ceph/ceph:v19, name=flamboyant_kare, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:52:12 compute-0 podman[93281]: 2026-01-23 09:52:12.403558837 +0000 UTC m=+1.220854106 container start 300c01d6e0d1d3dae5676aecfc1cda97296fb74d29d26449cd0a259584b20346 (image=quay.io/ceph/ceph:v19, name=flamboyant_kare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Jan 23 09:52:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:52:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 23 09:52:12 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2988268721' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:52:12 compute-0 flamboyant_kare[93296]: 
Jan 23 09:52:12 compute-0 podman[93281]: 2026-01-23 09:52:12.775842969 +0000 UTC m=+1.593138258 container attach 300c01d6e0d1d3dae5676aecfc1cda97296fb74d29d26449cd0a259584b20346 (image=quay.io/ceph/ceph:v19, name=flamboyant_kare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 09:52:12 compute-0 flamboyant_kare[93296]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard//server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.nbdygh/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-1.syfcuk","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.yzflfx","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 23 09:52:12 compute-0 systemd[1]: libpod-300c01d6e0d1d3dae5676aecfc1cda97296fb74d29d26449cd0a259584b20346.scope: Deactivated successfully.
Jan 23 09:52:12 compute-0 podman[93281]: 2026-01-23 09:52:12.77919366 +0000 UTC m=+1.596488929 container died 300c01d6e0d1d3dae5676aecfc1cda97296fb74d29d26449cd0a259584b20346 (image=quay.io/ceph/ceph:v19, name=flamboyant_kare, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:52:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v30: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:13 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:13 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:13 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:14 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:14 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e42 e42: 3 total, 2 up, 3 in
Jan 23 09:52:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-45f074a147f6f89401c487219a6280c4ce429a72ea163a6c94c9dc244d5b79e3-merged.mount: Deactivated successfully.
Jan 23 09:52:14 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 2 up, 3 in
Jan 23 09:52:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v32: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 23 09:52:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 23 09:52:15 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:15 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:15 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:15 compute-0 ceph-mon[74335]: Deploying daemon rgw.rgw.compute-1.syfcuk on compute-1
Jan 23 09:52:15 compute-0 ceph-mon[74335]: pgmap v29: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:15 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:15 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 42 pg[9.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:15 compute-0 podman[93281]: 2026-01-23 09:52:15.688660673 +0000 UTC m=+4.505955942 container remove 300c01d6e0d1d3dae5676aecfc1cda97296fb74d29d26449cd0a259584b20346 (image=quay.io/ceph/ceph:v19, name=flamboyant_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 09:52:15 compute-0 sudo[93278]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:15 compute-0 systemd[1]: libpod-conmon-300c01d6e0d1d3dae5676aecfc1cda97296fb74d29d26449cd0a259584b20346.scope: Deactivated successfully.
Jan 23 09:52:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:52:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 23 09:52:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 23 09:52:16 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 23 09:52:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e43 e43: 3 total, 2 up, 3 in
Jan 23 09:52:16 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 2 up, 3 in
Jan 23 09:52:16 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:16 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:16 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:16 compute-0 sudo[93357]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulpsidwyfolbtaqxeoebfoawmpqniuvj ; /usr/bin/python3'
Jan 23 09:52:16 compute-0 sudo[93357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:52:16 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:52:16 compute-0 python3[93359]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:52:16 compute-0 podman[93360]: 2026-01-23 09:52:16.763945827 +0000 UTC m=+0.025224787 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:52:16 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 43 pg[9.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:16 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:16 compute-0 podman[93360]: 2026-01-23 09:52:16.882104683 +0000 UTC m=+0.143383613 container create 6d06691ddc2e2d28e456d8412595c36c2056e88920b803e43361c2cf01124b9b (image=quay.io/ceph/ceph:v19, name=nice_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 09:52:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jbpfwf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 23 09:52:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jbpfwf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v34: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:17 compute-0 systemd[1]: Started libpod-conmon-6d06691ddc2e2d28e456d8412595c36c2056e88920b803e43361c2cf01124b9b.scope.
Jan 23 09:52:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1f7e2a688089188d7a19d4e7850566e715f812b858891880a5d7b0428eae7d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1f7e2a688089188d7a19d4e7850566e715f812b858891880a5d7b0428eae7d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2988268721' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 09:52:17 compute-0 ceph-mon[74335]: pgmap v30: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:17 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:17 compute-0 ceph-mon[74335]: osdmap e42: 3 total, 2 up, 3 in
Jan 23 09:52:17 compute-0 ceph-mon[74335]: pgmap v32: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2692084146' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 23 09:52:17 compute-0 ceph-mon[74335]: osdmap e43: 3 total, 2 up, 3 in
Jan 23 09:52:17 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:17 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:17 compute-0 podman[93360]: 2026-01-23 09:52:17.815848206 +0000 UTC m=+1.077127166 container init 6d06691ddc2e2d28e456d8412595c36c2056e88920b803e43361c2cf01124b9b (image=quay.io/ceph/ceph:v19, name=nice_tesla, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 23 09:52:17 compute-0 podman[93360]: 2026-01-23 09:52:17.82300792 +0000 UTC m=+1.084286850 container start 6d06691ddc2e2d28e456d8412595c36c2056e88920b803e43361c2cf01124b9b (image=quay.io/ceph/ceph:v19, name=nice_tesla, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:52:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 23 09:52:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:18 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:18 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:18 compute-0 podman[93360]: 2026-01-23 09:52:18.150508643 +0000 UTC m=+1.411787593 container attach 6d06691ddc2e2d28e456d8412595c36c2056e88920b803e43361c2cf01124b9b (image=quay.io/ceph/ceph:v19, name=nice_tesla, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 09:52:18 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jbpfwf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:18 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 23 09:52:18 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421940163' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 23 09:52:18 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 09:52:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:18 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 23 09:52:18 compute-0 nice_tesla[93375]: mimic
Jan 23 09:52:18 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:18 compute-0 systemd[1]: libpod-6d06691ddc2e2d28e456d8412595c36c2056e88920b803e43361c2cf01124b9b.scope: Deactivated successfully.
Jan 23 09:52:18 compute-0 podman[93360]: 2026-01-23 09:52:18.508988469 +0000 UTC m=+1.770267409 container died 6d06691ddc2e2d28e456d8412595c36c2056e88920b803e43361c2cf01124b9b (image=quay.io/ceph/ceph:v19, name=nice_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:52:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e44 e44: 3 total, 2 up, 3 in
Jan 23 09:52:18 compute-0 ceph-mgr[74633]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 23 09:52:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v35: 195 pgs: 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Jan 23 09:52:19 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 2 up, 3 in
Jan 23 09:52:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:19 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:19 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1020282776; not ready for session (expect reconnect)
Jan 23 09:52:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a1f7e2a688089188d7a19d4e7850566e715f812b858891880a5d7b0428eae7d-merged.mount: Deactivated successfully.
Jan 23 09:52:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:19 compute-0 ceph-mgr[74633]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 09:52:19 compute-0 ceph-mon[74335]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:52:19 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:19 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jbpfwf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:19 compute-0 ceph-mon[74335]: pgmap v34: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 23 09:52:19 compute-0 ceph-mon[74335]: OSD bench result of 2077.197482 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 09:52:19 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:19 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jbpfwf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:19 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1421940163' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 23 09:52:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 23 09:52:19 compute-0 podman[93360]: 2026-01-23 09:52:19.657104135 +0000 UTC m=+2.918383065 container remove 6d06691ddc2e2d28e456d8412595c36c2056e88920b803e43361c2cf01124b9b (image=quay.io/ceph/ceph:v19, name=nice_tesla, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 23 09:52:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:19 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.jbpfwf on compute-0
Jan 23 09:52:19 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.jbpfwf on compute-0
Jan 23 09:52:19 compute-0 sudo[93357]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:19 compute-0 systemd[1]: libpod-conmon-6d06691ddc2e2d28e456d8412595c36c2056e88920b803e43361c2cf01124b9b.scope: Deactivated successfully.
Jan 23 09:52:19 compute-0 sudo[93415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:52:19 compute-0 sudo[93415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:52:19 compute-0 sudo[93415]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:19 compute-0 sudo[93440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:52:19 compute-0 sudo[93440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:52:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 23 09:52:19 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1020282776,v1:192.168.122.102:6801/1020282776] boot
Jan 23 09:52:19 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 23 09:52:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:52:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:20 compute-0 podman[93506]: 2026-01-23 09:52:20.148564171 +0000 UTC m=+0.025435034 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.19( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.731274843s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180847168s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[6.1b( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.575895071s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025497437s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.19( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.731201828s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180847168s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[6.1b( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.575552225s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025497437s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.730781078s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180862427s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.1c( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.730778098s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180877686s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.1c( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.730752885s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180877686s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.1d( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.730150163s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180511475s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.730241537s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180603027s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.075295590s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.525665283s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.730221927s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180603027s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.075278915s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.525665283s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.1d( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.730096757s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180511475s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=24/25 n=0 ec=24/14 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.730754614s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180862427s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.6( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.729586005s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180221558s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.6( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.729570866s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180221558s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.729709268s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180450439s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.729693949s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180450439s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.0( empty local-lis/les=26/29 n=0 ec=16/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.574469805s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025344849s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.0( empty local-lis/les=26/29 n=0 ec=16/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.574454308s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025344849s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=24/25 n=0 ec=14/14 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.729433477s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180435181s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.574170589s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025238037s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=24/25 n=0 ec=14/14 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.729382575s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180435181s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=26/29 n=0 ec=26/17 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.574142218s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025238037s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074922577s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526062012s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074905463s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526062012s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.d( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.574517965s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025741577s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.d( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.574501991s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025741577s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074715011s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526016235s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074700162s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526016235s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.c( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074623093s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526046753s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.c( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074608609s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526046753s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074914582s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526367188s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074894853s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526367188s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.8( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.574024677s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025527954s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.b( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.574028730s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025558472s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.b( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.574011803s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025558472s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.10( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074843310s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526458740s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074973784s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526596069s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.8( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.573991537s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025527954s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.10( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074826285s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526458740s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074951164s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526596069s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.3( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.728723526s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180450439s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.728413463s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180175781s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.728396297s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180175781s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074762642s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526580811s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.13( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074512661s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526306152s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.573996305s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025833130s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074735738s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526580811s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[2.13( empty local-lis/les=30/31 n=0 ec=24/13 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074463442s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526306152s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.573979855s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025833130s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.573808670s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025756836s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=26/29 n=0 ec=26/16 lis/c=26/26 les/c/f=29/29/0 sis=45 pruub=2.573786974s) [2] r=-1 lpr=45 pi=[26,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 161.025756836s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[4.3( empty local-lis/les=24/25 n=0 ec=24/15 lis/c=24/24 les/c/f=25/25/0 sis=45 pruub=0.728369653s) [2] r=-1 lpr=45 pi=[24,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 159.180450439s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074359894s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526580811s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:20 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=30/31 n=0 ec=27/18 lis/c=30/30 les/c/f=31/31/0 sis=45 pruub=0.074223943s) [2] r=-1 lpr=45 pi=[30,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.526580811s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:20 compute-0 podman[93506]: 2026-01-23 09:52:20.340092344 +0000 UTC m=+0.216963187 container create d0aceef02e83ead0f859349fc3627443b847d27b9a013e96deaa30b48f47dd34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:52:20 compute-0 systemd[1]: Started libpod-conmon-d0aceef02e83ead0f859349fc3627443b847d27b9a013e96deaa30b48f47dd34.scope.
Jan 23 09:52:20 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:52:20 compute-0 podman[93506]: 2026-01-23 09:52:20.41750212 +0000 UTC m=+0.294372993 container init d0aceef02e83ead0f859349fc3627443b847d27b9a013e96deaa30b48f47dd34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:52:20 compute-0 podman[93506]: 2026-01-23 09:52:20.423577486 +0000 UTC m=+0.300448329 container start d0aceef02e83ead0f859349fc3627443b847d27b9a013e96deaa30b48f47dd34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:52:20 compute-0 stupefied_haibt[93526]: 167 167
Jan 23 09:52:20 compute-0 systemd[1]: libpod-d0aceef02e83ead0f859349fc3627443b847d27b9a013e96deaa30b48f47dd34.scope: Deactivated successfully.
Jan 23 09:52:20 compute-0 sudo[93562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umcvoxwzeuwcjhroflmrpflvtaouccrj ; /usr/bin/python3'
Jan 23 09:52:20 compute-0 podman[93506]: 2026-01-23 09:52:20.498991608 +0000 UTC m=+0.375862491 container attach d0aceef02e83ead0f859349fc3627443b847d27b9a013e96deaa30b48f47dd34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 09:52:20 compute-0 podman[93506]: 2026-01-23 09:52:20.499706348 +0000 UTC m=+0.376577221 container died d0aceef02e83ead0f859349fc3627443b847d27b9a013e96deaa30b48f47dd34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:52:20 compute-0 sudo[93562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:52:20 compute-0 python3[93564]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:52:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 23 09:52:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6974b2764e4157e2a5dd95aa886caf4c5e9b63505ec63f6a598a36bef3e169a-merged.mount: Deactivated successfully.
Jan 23 09:52:20 compute-0 podman[93506]: 2026-01-23 09:52:20.957587689 +0000 UTC m=+0.834458532 container remove d0aceef02e83ead0f859349fc3627443b847d27b9a013e96deaa30b48f47dd34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:52:20 compute-0 systemd[1]: Reloading.
Jan 23 09:52:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v38: 195 pgs: 28 peering, 167 active+clean; 450 KiB data, 480 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 682 B/s wr, 1 op/s
Jan 23 09:52:21 compute-0 systemd-rc-local-generator[93608]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:52:21 compute-0 systemd-sysv-generator[93611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:52:21 compute-0 podman[93565]: 2026-01-23 09:52:21.037693119 +0000 UTC m=+0.354135239 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:52:21 compute-0 systemd[1]: libpod-conmon-d0aceef02e83ead0f859349fc3627443b847d27b9a013e96deaa30b48f47dd34.scope: Deactivated successfully.
Jan 23 09:52:21 compute-0 podman[93565]: 2026-01-23 09:52:21.307460141 +0000 UTC m=+0.623902241 container create 1ceb53dc2837424b9abdbeeec08195f58cf1c2594305e60369ce3d30b1901322 (image=quay.io/ceph/ceph:v19, name=peaceful_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 09:52:21 compute-0 systemd[1]: Reloading.
Jan 23 09:52:21 compute-0 systemd-rc-local-generator[93648]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:52:21 compute-0 systemd-sysv-generator[93651]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:52:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 23 09:52:21 compute-0 ceph-mon[74335]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 09:52:21 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:21 compute-0 ceph-mon[74335]: pgmap v35: 195 pgs: 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Jan 23 09:52:21 compute-0 ceph-mon[74335]: osdmap e44: 3 total, 2 up, 3 in
Jan 23 09:52:21 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:21 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:21 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:21 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:21 compute-0 ceph-mon[74335]: Deploying daemon rgw.rgw.compute-0.jbpfwf on compute-0
Jan 23 09:52:21 compute-0 ceph-mon[74335]: osd.2 [v2:192.168.122.102:6800/1020282776,v1:192.168.122.102:6801/1020282776] boot
Jan 23 09:52:21 compute-0 ceph-mon[74335]: osdmap e45: 3 total, 3 up, 3 in
Jan 23 09:52:21 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:52:21 compute-0 systemd[1]: Started libpod-conmon-1ceb53dc2837424b9abdbeeec08195f58cf1c2594305e60369ce3d30b1901322.scope.
Jan 23 09:52:21 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.jbpfwf for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:52:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:52:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4907eba9f4bad028f6314753de28556f26fb0da5a6bf74773766facc8f9c3f62/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4907eba9f4bad028f6314753de28556f26fb0da5a6bf74773766facc8f9c3f62/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:22 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 23 09:52:22 compute-0 podman[93565]: 2026-01-23 09:52:22.143860595 +0000 UTC m=+1.460302705 container init 1ceb53dc2837424b9abdbeeec08195f58cf1c2594305e60369ce3d30b1901322 (image=quay.io/ceph/ceph:v19, name=peaceful_chandrasekhar, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 09:52:22 compute-0 podman[93565]: 2026-01-23 09:52:22.152918681 +0000 UTC m=+1.469360811 container start 1ceb53dc2837424b9abdbeeec08195f58cf1c2594305e60369ce3d30b1901322 (image=quay.io/ceph/ceph:v19, name=peaceful_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:52:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 23 09:52:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 23 09:52:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 23 09:52:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 23 09:52:22 compute-0 podman[93565]: 2026-01-23 09:52:22.274612693 +0000 UTC m=+1.591054793 container attach 1ceb53dc2837424b9abdbeeec08195f58cf1c2594305e60369ce3d30b1901322 (image=quay.io/ceph/ceph:v19, name=peaceful_chandrasekhar, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 09:52:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 23 09:52:22 compute-0 ceph-mon[74335]: pgmap v38: 195 pgs: 28 peering, 167 active+clean; 450 KiB data, 480 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 682 B/s wr, 1 op/s
Jan 23 09:52:22 compute-0 ceph-mon[74335]: osdmap e46: 3 total, 3 up, 3 in
Jan 23 09:52:22 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3935157835' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 23 09:52:22 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1572426654' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 23 09:52:22 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 23 09:52:22 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 23 09:52:22 compute-0 podman[93729]: 2026-01-23 09:52:22.483015305 +0000 UTC m=+0.046611840 container create 318a53d5c542e8639ae8cbe910fcc9d6c8a7c7006a978c4655d1d2582222973a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-rgw-rgw-compute-0-jbpfwf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 09:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6976f01a9e7e4f3455082effd4886627e2d584498d9eada66fd9906f9d72300c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6976f01a9e7e4f3455082effd4886627e2d584498d9eada66fd9906f9d72300c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6976f01a9e7e4f3455082effd4886627e2d584498d9eada66fd9906f9d72300c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6976f01a9e7e4f3455082effd4886627e2d584498d9eada66fd9906f9d72300c/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.jbpfwf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:22 compute-0 podman[93729]: 2026-01-23 09:52:22.540946531 +0000 UTC m=+0.104543086 container init 318a53d5c542e8639ae8cbe910fcc9d6c8a7c7006a978c4655d1d2582222973a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-rgw-rgw-compute-0-jbpfwf, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:52:22 compute-0 podman[93729]: 2026-01-23 09:52:22.548580038 +0000 UTC m=+0.112176573 container start 318a53d5c542e8639ae8cbe910fcc9d6c8a7c7006a978c4655d1d2582222973a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-rgw-rgw-compute-0-jbpfwf, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:52:22 compute-0 podman[93729]: 2026-01-23 09:52:22.459504085 +0000 UTC m=+0.023100640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:52:22 compute-0 bash[93729]: 318a53d5c542e8639ae8cbe910fcc9d6c8a7c7006a978c4655d1d2582222973a
Jan 23 09:52:22 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.jbpfwf for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:52:22 compute-0 sudo[93440]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:22 compute-0 radosgw[93748]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 23 09:52:22 compute-0 radosgw[93748]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Jan 23 09:52:22 compute-0 radosgw[93748]: framework: beast
Jan 23 09:52:22 compute-0 radosgw[93748]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 23 09:52:22 compute-0 radosgw[93748]: init_numa not setting numa affinity
Jan 23 09:52:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:52:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 23 09:52:22 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1010663506' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 23 09:52:22 compute-0 peaceful_chandrasekhar[93661]: 
Jan 23 09:52:22 compute-0 peaceful_chandrasekhar[93661]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":9}}
Jan 23 09:52:22 compute-0 systemd[1]: libpod-1ceb53dc2837424b9abdbeeec08195f58cf1c2594305e60369ce3d30b1901322.scope: Deactivated successfully.
Jan 23 09:52:22 compute-0 podman[93565]: 2026-01-23 09:52:22.672614894 +0000 UTC m=+1.989056994 container died 1ceb53dc2837424b9abdbeeec08195f58cf1c2594305e60369ce3d30b1901322 (image=quay.io/ceph/ceph:v19, name=peaceful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4907eba9f4bad028f6314753de28556f26fb0da5a6bf74773766facc8f9c3f62-merged.mount: Deactivated successfully.
Jan 23 09:52:22 compute-0 podman[93565]: 2026-01-23 09:52:22.725422931 +0000 UTC m=+2.041865031 container remove 1ceb53dc2837424b9abdbeeec08195f58cf1c2594305e60369ce3d30b1901322 (image=quay.io/ceph/ceph:v19, name=peaceful_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:52:22 compute-0 systemd[1]: libpod-conmon-1ceb53dc2837424b9abdbeeec08195f58cf1c2594305e60369ce3d30b1901322.scope: Deactivated successfully.
Jan 23 09:52:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 23 09:52:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 23 09:52:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 23 09:52:22 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 23 09:52:22 compute-0 sudo[93562]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:52:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v41: 196 pgs: 1 unknown, 28 peering, 167 active+clean; 450 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:52:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:52:23 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event b531fcdb-03a4-4cf0-afc0-fe04e1941305 (Global Recovery Event) in 5 seconds
Jan 23 09:52:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 23 09:52:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 23 09:52:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 23 09:52:24 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1010663506' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 23 09:52:24 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 23 09:52:24 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 23 09:52:24 compute-0 ceph-mon[74335]: osdmap e47: 3 total, 3 up, 3 in
Jan 23 09:52:24 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:24 compute-0 ceph-mon[74335]: pgmap v41: 196 pgs: 1 unknown, 28 peering, 167 active+clean; 450 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:52:24 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 23 09:52:24 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:24 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 4a82989e-3b7f-4370-8672-3e09753c7f87 (Updating rgw.rgw deployment (+3 -> 3))
Jan 23 09:52:24 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 4a82989e-3b7f-4370-8672-3e09753c7f87 (Updating rgw.rgw deployment (+3 -> 3)) in 19 seconds
Jan 23 09:52:24 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 09:52:24 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 09:52:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 23 09:52:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v43: 196 pgs: 1 creating+peering, 28 peering, 167 active+clean; 450 KiB data, 481 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 795 B/s wr, 6 op/s
Jan 23 09:52:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 23 09:52:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 23 09:52:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 23 09:52:25 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:25 compute-0 ceph-mon[74335]: osdmap e48: 3 total, 3 up, 3 in
Jan 23 09:52:25 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:25 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 23 09:52:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 23 09:52:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 09:52:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 23 09:52:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 09:52:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 23 09:52:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 09:52:26 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:26 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 4fb14608-a056-42ab-b32e-152a605da8e7 (Updating mds.cephfs deployment (+3 -> 3))
Jan 23 09:52:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.prgzmm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 23 09:52:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.prgzmm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 09:52:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.prgzmm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 09:52:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:26 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:26 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.prgzmm on compute-2
Jan 23 09:52:26 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.prgzmm on compute-2
Jan 23 09:52:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 23 09:52:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v45: 197 pgs: 1 unknown, 1 creating+peering, 195 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 744 B/s wr, 6 op/s
Jan 23 09:52:27 compute-0 ceph-mon[74335]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 09:52:27 compute-0 ceph-mon[74335]: pgmap v43: 196 pgs: 1 creating+peering, 28 peering, 167 active+clean; 450 KiB data, 481 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 795 B/s wr, 6 op/s
Jan 23 09:52:27 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:27 compute-0 ceph-mon[74335]: osdmap e49: 3 total, 3 up, 3 in
Jan 23 09:52:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3935157835' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 09:52:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1572426654' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 09:52:27 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 09:52:27 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 09:52:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 09:52:27 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:27 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.prgzmm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 09:52:27 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.prgzmm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 09:52:27 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 09:52:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 09:52:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 09:52:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 23 09:52:27 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 23 09:52:28 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:52:28 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:28 compute-0 ceph-mon[74335]: Deploying daemon mds.cephfs.compute-2.prgzmm on compute-2
Jan 23 09:52:28 compute-0 ceph-mon[74335]: pgmap v45: 197 pgs: 1 unknown, 1 creating+peering, 195 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 744 B/s wr, 6 op/s
Jan 23 09:52:28 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 09:52:28 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 09:52:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 09:52:28 compute-0 ceph-mon[74335]: osdmap e50: 3 total, 3 up, 3 in
Jan 23 09:52:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:52:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 23 09:52:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 23 09:52:28 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 23 09:52:28 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 13 completed events
Jan 23 09:52:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:52:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v48: 197 pgs: 197 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:52:29 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:29 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 09:52:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:52:29 compute-0 ceph-mon[74335]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:52:29 compute-0 ceph-mon[74335]: osdmap e51: 3 total, 3 up, 3 in
Jan 23 09:52:29 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:29 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:52:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 23 09:52:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 23 09:52:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 23 09:52:30 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 23 09:52:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 23 09:52:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 09:52:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 23 09:52:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 09:52:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 23 09:52:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 09:52:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ymknms", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 23 09:52:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ymknms", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 09:52:30 compute-0 ceph-mon[74335]: pgmap v48: 197 pgs: 197 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:52:30 compute-0 ceph-mon[74335]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 09:52:30 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:30 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:30 compute-0 ceph-mon[74335]: osdmap e52: 3 total, 3 up, 3 in
Jan 23 09:52:30 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3935157835' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 09:52:30 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1572426654' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 09:52:30 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 09:52:30 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 09:52:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ymknms", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 09:52:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:30 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:30 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ymknms on compute-0
Jan 23 09:52:30 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ymknms on compute-0
Jan 23 09:52:30 compute-0 sudo[94360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:52:30 compute-0 sudo[94360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:52:30 compute-0 sudo[94360]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:31 compute-0 sudo[94385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:52:31 compute-0 sudo[94385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:52:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v50: 198 pgs: 1 unknown, 197 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e3 new map
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-01-23T09:52:30:834166+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T09:51:34.000760+0000
                                           modified        2026-01-23T09:51:34.000760+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.prgzmm{-1:24193} state up:standby seq 1 addr [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] compat {c=[1],r=[1],i=[1fff]}]
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] up:boot
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] as mds.0
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.prgzmm assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 23 09:52:31 compute-0 ceph-mgr[74633]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.prgzmm v2:192.168.122.102:6804/1390112456; not ready for session (expect reconnect)
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.prgzmm"} v 0)
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.prgzmm"}]: dispatch
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e3 all = 0
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e4 new map
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-01-23T09:52:31:070018+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T09:51:34.000760+0000
                                           modified        2026-01-23T09:52:31.070004+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24193}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.prgzmm{0:24193} state up:creating seq 1 addr [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:creating}
Jan 23 09:52:31 compute-0 podman[94449]: 2026-01-23 09:52:31.42269819 +0000 UTC m=+0.046812555 container create 88558c64c0ce10de6d1679239d596826221adf72d25a7949cb80259bf37a2035 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 09:52:31 compute-0 systemd[1]: Started libpod-conmon-88558c64c0ce10de6d1679239d596826221adf72d25a7949cb80259bf37a2035.scope.
Jan 23 09:52:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:52:31 compute-0 podman[94449]: 2026-01-23 09:52:31.404630239 +0000 UTC m=+0.028744634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:52:31 compute-0 podman[94449]: 2026-01-23 09:52:31.50646554 +0000 UTC m=+0.130579925 container init 88558c64c0ce10de6d1679239d596826221adf72d25a7949cb80259bf37a2035 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mcnulty, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 09:52:31 compute-0 podman[94449]: 2026-01-23 09:52:31.515431814 +0000 UTC m=+0.139546179 container start 88558c64c0ce10de6d1679239d596826221adf72d25a7949cb80259bf37a2035 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mcnulty, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.prgzmm is now active in filesystem cephfs as rank 0
Jan 23 09:52:31 compute-0 podman[94449]: 2026-01-23 09:52:31.519478624 +0000 UTC m=+0.143593049 container attach 88558c64c0ce10de6d1679239d596826221adf72d25a7949cb80259bf37a2035 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 23 09:52:31 compute-0 musing_mcnulty[94466]: 167 167
Jan 23 09:52:31 compute-0 systemd[1]: libpod-88558c64c0ce10de6d1679239d596826221adf72d25a7949cb80259bf37a2035.scope: Deactivated successfully.
Jan 23 09:52:31 compute-0 podman[94449]: 2026-01-23 09:52:31.522123966 +0000 UTC m=+0.146238331 container died 88558c64c0ce10de6d1679239d596826221adf72d25a7949cb80259bf37a2035 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mcnulty, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 09:52:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b2871da4cbdba01c083f96cd6737010b924f6ca52fff88c2b8bea759197462f-merged.mount: Deactivated successfully.
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 23 09:52:31 compute-0 podman[94449]: 2026-01-23 09:52:31.604530769 +0000 UTC m=+0.228645134 container remove 88558c64c0ce10de6d1679239d596826221adf72d25a7949cb80259bf37a2035 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 09:52:31 compute-0 systemd[1]: libpod-conmon-88558c64c0ce10de6d1679239d596826221adf72d25a7949cb80259bf37a2035.scope: Deactivated successfully.
Jan 23 09:52:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 09:52:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:52:32 compute-0 systemd[1]: Reloading.
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ymknms", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ymknms", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:32 compute-0 ceph-mon[74335]: Deploying daemon mds.cephfs.compute-0.ymknms on compute-0
Jan 23 09:52:32 compute-0 ceph-mon[74335]: pgmap v50: 198 pgs: 1 unknown, 197 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:52:32 compute-0 ceph-mon[74335]: mds.? [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] up:boot
Jan 23 09:52:32 compute-0 ceph-mon[74335]: daemon mds.cephfs.compute-2.prgzmm assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 23 09:52:32 compute-0 ceph-mon[74335]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 23 09:52:32 compute-0 ceph-mon[74335]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 23 09:52:32 compute-0 ceph-mon[74335]: Cluster is now healthy
Jan 23 09:52:32 compute-0 ceph-mon[74335]: fsmap cephfs:0 1 up:standby
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.prgzmm"}]: dispatch
Jan 23 09:52:32 compute-0 ceph-mon[74335]: fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:creating}
Jan 23 09:52:32 compute-0 ceph-mon[74335]: daemon mds.cephfs.compute-2.prgzmm is now active in filesystem cephfs as rank 0
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 09:52:32 compute-0 ceph-mon[74335]: osdmap e53: 3 total, 3 up, 3 in
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3935157835' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1572426654' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 09:52:32 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 09:52:32 compute-0 systemd-rc-local-generator[94510]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:52:32 compute-0 systemd-sysv-generator[94513]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:52:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 23 09:52:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e5 new map
Jan 23 09:52:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-01-23T09:52:32:417167+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T09:51:34.000760+0000
                                           modified        2026-01-23T09:52:32.417165+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24193}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24193 members: 24193
                                           [mds.cephfs.compute-2.prgzmm{0:24193} state up:active seq 2 addr [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 23 09:52:32 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] up:active
Jan 23 09:52:32 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active}
Jan 23 09:52:32 compute-0 systemd[1]: Reloading.
Jan 23 09:52:32 compute-0 systemd-sysv-generator[94554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:52:32 compute-0 systemd-rc-local-generator[94550]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:52:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 09:52:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 09:52:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 09:52:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 23 09:52:32 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 23 09:52:32 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.ymknms for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:52:32 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:52:32
Jan 23 09:52:32 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:52:32 compute-0 ceph-mgr[74633]: [balancer INFO root] Some PGs (0.005051) are unknown; try again later
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 32)
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 09:52:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Jan 23 09:52:33 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v53: 198 pgs: 1 unknown, 197 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:52:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:52:33 compute-0 podman[94608]: 2026-01-23 09:52:33.184142529 +0000 UTC m=+0.048829000 container create e4542c7adce0f518b7f99d99679470337d13a224c21a637a7c2e54819c64d093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mds-cephfs-compute-0-ymknms, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 23 09:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2790d82b840c49fa2382f0302c8c48615986546c1a8b085b001f0948f4ffe473/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2790d82b840c49fa2382f0302c8c48615986546c1a8b085b001f0948f4ffe473/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2790d82b840c49fa2382f0302c8c48615986546c1a8b085b001f0948f4ffe473/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2790d82b840c49fa2382f0302c8c48615986546c1a8b085b001f0948f4ffe473/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ymknms supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:33 compute-0 podman[94608]: 2026-01-23 09:52:33.255633954 +0000 UTC m=+0.120320435 container init e4542c7adce0f518b7f99d99679470337d13a224c21a637a7c2e54819c64d093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mds-cephfs-compute-0-ymknms, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 09:52:33 compute-0 podman[94608]: 2026-01-23 09:52:33.262598733 +0000 UTC m=+0.127285194 container start e4542c7adce0f518b7f99d99679470337d13a224c21a637a7c2e54819c64d093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mds-cephfs-compute-0-ymknms, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:52:33 compute-0 podman[94608]: 2026-01-23 09:52:33.167174467 +0000 UTC m=+0.031860948 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:52:33 compute-0 bash[94608]: e4542c7adce0f518b7f99d99679470337d13a224c21a637a7c2e54819c64d093
Jan 23 09:52:33 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.ymknms for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:52:33 compute-0 ceph-mds[94628]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 09:52:33 compute-0 ceph-mds[94628]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Jan 23 09:52:33 compute-0 ceph-mds[94628]: main not setting numa affinity
Jan 23 09:52:33 compute-0 ceph-mds[94628]: pidfile_write: ignore empty --pid-file
Jan 23 09:52:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mds-cephfs-compute-0-ymknms[94624]: starting mds.cephfs.compute-0.ymknms at 
Jan 23 09:52:33 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Updating MDS map to version 5 from mon.0
Jan 23 09:52:33 compute-0 sudo[94385]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:52:33 compute-0 ceph-mon[74335]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 09:52:33 compute-0 ceph-mon[74335]: mds.? [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] up:active
Jan 23 09:52:33 compute-0 ceph-mon[74335]: fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active}
Jan 23 09:52:33 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/681468082' entity='client.rgw.rgw.compute-0.jbpfwf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 09:52:33 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-2.yzflfx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 09:52:33 compute-0 ceph-mon[74335]: from='client.? ' entity='client.rgw.rgw.compute-1.syfcuk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 09:52:33 compute-0 ceph-mon[74335]: osdmap e54: 3 total, 3 up, 3 in
Jan 23 09:52:33 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:52:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:52:33 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:52:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 23 09:52:33 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 09:52:33 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 23 09:52:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e6 new map
Jan 23 09:52:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2026-01-23T09:52:33:487599+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T09:51:34.000760+0000
                                           modified        2026-01-23T09:52:32.417165+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24193}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24193 members: 24193
                                           [mds.cephfs.compute-2.prgzmm{0:24193} state up:active seq 2 addr [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ymknms{-1:14502} state up:standby seq 1 addr [v2:192.168.122.100:6806/3718923574,v1:192.168.122.100:6807/3718923574] compat {c=[1],r=[1],i=[1fff]}]
Jan 23 09:52:33 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Updating MDS map to version 6 from mon.0
Jan 23 09:52:33 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Monitors have assigned me to become a standby
Jan 23 09:52:33 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3718923574,v1:192.168.122.100:6807/3718923574] up:boot
Jan 23 09:52:33 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active} 1 up:standby
Jan 23 09:52:34 compute-0 radosgw[93748]: v1 topic migration: starting v1 topic migration..
Jan 23 09:52:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-rgw-rgw-compute-0-jbpfwf[93744]: 2026-01-23T09:52:34.036+0000 7fa6f6b78980 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 23 09:52:34 compute-0 radosgw[93748]: LDAP not started since no server URIs were provided in the configuration.
Jan 23 09:52:34 compute-0 radosgw[93748]: v1 topic migration: finished v1 topic migration
Jan 23 09:52:34 compute-0 radosgw[93748]: framework: beast
Jan 23 09:52:34 compute-0 radosgw[93748]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 23 09:52:34 compute-0 radosgw[93748]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 23 09:52:34 compute-0 ceph-mgr[74633]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 23 09:52:34 compute-0 radosgw[93748]: starting handler: beast
Jan 23 09:52:34 compute-0 radosgw[93748]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 09:52:34 compute-0 radosgw[93748]: mgrc service_daemon_register rgw.14496 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.jbpfwf,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=75d0a494-c738-4cca-b87e-be71cfd0ed45,zone_name=default,zonegroup_id=6635d7c3-d02c-4c4b-90b3-4ee042e293d6,zonegroup_name=default}
Jan 23 09:52:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ymknms"} v 0)
Jan 23 09:52:34 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ymknms"}]: dispatch
Jan 23 09:52:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e6 all = 0
Jan 23 09:52:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:52:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 23 09:52:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:34 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 23 09:52:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 23 09:52:34 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev d1366bfb-af60-4c57-be3a-1bf4305f011b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 23 09:52:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 23 09:52:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:52:34 compute-0 ceph-mon[74335]: pgmap v53: 198 pgs: 1 unknown, 197 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:52:34 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:34 compute-0 ceph-mon[74335]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 09:52:34 compute-0 ceph-mon[74335]: Cluster is now healthy
Jan 23 09:52:34 compute-0 ceph-mon[74335]: mds.? [v2:192.168.122.100:6806/3718923574,v1:192.168.122.100:6807/3718923574] up:boot
Jan 23 09:52:34 compute-0 ceph-mon[74335]: fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active} 1 up:standby
Jan 23 09:52:34 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ymknms"}]: dispatch
Jan 23 09:52:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v55: 198 pgs: 198 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 11 KiB/s wr, 41 op/s
Jan 23 09:52:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 09:52:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bcvzvj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 23 09:52:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bcvzvj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 09:52:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bcvzvj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 09:52:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:35 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.bcvzvj on compute-1
Jan 23 09:52:35 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.bcvzvj on compute-1
Jan 23 09:52:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 23 09:52:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:52:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:52:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 23 09:52:36 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 23 09:52:36 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 13448a83-fada-485e-983a-ef34957422c7 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 23 09:52:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 23 09:52:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:52:36 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 56 pg[8.0( v 37'1 (0'0,37'1] local-lis/les=36/37 n=1 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=56 pruub=13.835588455s) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 188.382827759s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:36 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 56 pg[8.0( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=56 pruub=13.835588455s) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown pruub 188.382827759s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:52:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:36 compute-0 ceph-mon[74335]: osdmap e55: 3 total, 3 up, 3 in
Jan 23 09:52:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:52:36 compute-0 ceph-mon[74335]: pgmap v55: 198 pgs: 198 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 11 KiB/s wr, 41 op/s
Jan 23 09:52:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bcvzvj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 09:52:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bcvzvj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 09:52:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:36 compute-0 ceph-mon[74335]: Deploying daemon mds.cephfs.compute-1.bcvzvj on compute-1
Jan 23 09:52:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v57: 229 pgs: 31 unknown, 198 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.8 KiB/s wr, 34 op/s
Jan 23 09:52:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 09:52:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 23 09:52:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:52:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:52:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 23 09:52:37 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 23 09:52:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:52:37 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 06df48ca-a1c7-443f-a4ac-2d26ff78d5c7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 23 09:52:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 23 09:52:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.14( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.16( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.15( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.17( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.10( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.11( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.2( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.3( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.f( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.8( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.9( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.a( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.e( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.d( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.c( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.b( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1( v 37'1 (0'0,37'1] local-lis/les=36/37 n=1 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.7( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[9.0( v 44'12 (0'0,44'12] local-lis/les=42/43 n=6 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=11.403837204s) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 44'11 mlcod 44'11 active pruub 186.987579346s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.6( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.5( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.4( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1b( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1a( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.19( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.18( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1f( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1e( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1d( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1c( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.13( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.12( v 37'1 lc 0'0 (0'0,37'1] local-lis/les=36/37 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[9.0( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=11.403837204s) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 44'11 mlcod 0'0 unknown pruub 186.987579346s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:37 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55c0a8590900) operator()   moving buffer(0x55c0a738b7e8 space 0x55c0a7021940 0x0~1000 clean)
Jan 23 09:52:37 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55c0a8590900) operator()   moving buffer(0x55c0a738a7a8 space 0x55c0a661cc40 0x0~1000 clean)
Jan 23 09:52:37 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55c0a8590900) operator()   moving buffer(0x55c0a738a2a8 space 0x55c0a71c3a10 0x0~1000 clean)
Jan 23 09:52:37 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55c0a8590900) operator()   moving buffer(0x55c0a738a3e8 space 0x55c0a73956d0 0x0~1000 clean)
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.16( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.14( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.3( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.2( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.f( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.17( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.10( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.15( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.11( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.8( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.a( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.e( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.c( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.d( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.b( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.0( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1( v 37'1 (0'0,37'1] local-lis/les=56/57 n=1 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.9( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.6( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.5( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.7( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.4( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1b( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.19( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1a( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1f( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.18( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1e( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1c( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.13( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.12( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 57 pg[8.1d( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=36/36 les/c/f=37/37/0 sis=56) [1] r=0 lpr=56 pi=[36,56)/1 crt=37'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
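The ceph-osd lines above record the placement groups created by the pool 8 pg_num split peering and going active ("transitioning to Primary", then "AllReplicasActivated Activating complete"), which is what eventually brings the pgmap summary back to all active+clean. Below is a minimal polling sketch with python-rados, assuming the default conffile and that the JSON "status" output carries a pgmap section with pgs_by_state and num_pgs keys; those key names are recalled from memory, not taken from this log.

import json
import time
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    while True:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "status", "format": "json"}), b'')
        pgmap = json.loads(outbuf).get("pgmap", {})
        states = {s["state_name"]: s["count"] for s in pgmap.get("pgs_by_state", [])}
        total = pgmap.get("num_pgs", 0)
        print(states)
        # Stop once every PG reports active+clean, as in "260 pgs: 260 active+clean" later on.
        if total and states.get("active+clean", 0) == total:
            break
        time.sleep(2)
finally:
    cluster.shutdown()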
Jan 23 09:52:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 23 09:52:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:37 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 4fb14608-a056-42ab-b32e-152a605da8e7 (Updating mds.cephfs deployment (+3 -> 3))
Jan 23 09:52:37 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 4fb14608-a056-42ab-b32e-152a605da8e7 (Updating mds.cephfs deployment (+3 -> 3)) in 12 seconds
Jan 23 09:52:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 23 09:52:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:52:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:52:37 compute-0 ceph-mon[74335]: osdmap e56: 3 total, 3 up, 3 in
Jan 23 09:52:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:52:37 compute-0 ceph-mon[74335]: pgmap v57: 229 pgs: 31 unknown, 198 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.8 KiB/s wr, 34 op/s
Jan 23 09:52:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:52:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:52:37 compute-0 ceph-mon[74335]: osdmap e57: 3 total, 3 up, 3 in
Jan 23 09:52:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:52:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:37 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:38 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 23 09:52:38 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 23 09:52:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 23 09:52:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 23 09:52:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:38 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 9e0d73cd-767d-4d91-90b4-c0139b77151f (Updating nfs.cephfs deployment (+3 -> 3))
Jan 23 09:52:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:52:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:52:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 23 09:52:38 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 23 09:52:38 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev fdd7c963-8800-4877-b2d4-8fab20dead7a (PG autoscaler increasing pool 11 PGs from 1 to 32)
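Autoscaler events like the one above explain the pairs of commands in the audit log: the autoscaler raises pg_num as the target for a pool, and the follow-up pg_num_actual / pgp_num_actual commands are the incremental steps that carry the split out. A small read-back sketch, assuming python-rados, the default conffile, and that "osd pool get" with format=json returns the requested variable under a key of the same name (an assumption about the JSON layout):

import json
import rados

def pool_get(cluster, pool, var):
    # "osd pool get" is a mon command; the reply is JSON with the requested variable.
    cmd = {"prefix": "osd pool get", "pool": pool, "var": var, "format": "json"}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
    if ret != 0:
        raise RuntimeError(outs)
    return json.loads(outbuf)[var]

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    for var in ("pg_num", "pgp_num"):
        print(".nfs", var, pool_get(cluster, ".nfs", var))
finally:
    cluster.shutdown()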
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.15( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.14( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.16( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.17( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.11( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.10( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.3( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.2( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.e( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 23 09:52:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.8( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.9( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.b( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.f( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.a( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.d( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.c( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.6( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1( v 44'12 (0'0,44'12] local-lis/les=42/43 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.7( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.4( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1a( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1b( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.18( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.19( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.5( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1f( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1e( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1c( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1d( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.12( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.13( v 44'12 lc 0'0 (0'0,44'12] local-lis/les=42/43 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.15( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.bawllm
Jan 23 09:52:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.bawllm
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.14( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.16( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.10( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.11( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.2( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.e( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.3( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.17( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.8( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.b( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.d( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.f( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.0( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 44'11 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.a( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.c( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.6( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.7( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1a( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1b( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.18( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.9( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1f( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.19( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.5( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.12( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.4( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1d( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.13( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1e( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 58 pg[9.1c( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=44'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bawllm", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 23 09:52:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bawllm", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 23 09:52:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e7 new map
Jan 23 09:52:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2026-01-23T09:52:38.529421+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T09:51:34.000760+0000
                                           modified        2026-01-23T09:52:32.417165+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24193}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24193 members: 24193
                                           [mds.cephfs.compute-2.prgzmm{0:24193} state up:active seq 2 addr [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ymknms{-1:14502} state up:standby seq 1 addr [v2:192.168.122.100:6806/3718923574,v1:192.168.122.100:6807/3718923574] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.bcvzvj{-1:24200} state up:standby seq 1 addr [v2:192.168.122.101:6804/2199615937,v1:192.168.122.101:6805/2199615937] compat {c=[1],r=[1],i=[1fff]}]
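The print_map block above is the mon's text rendering of the FSMap: one active rank held by mds.cephfs.compute-2.prgzmm plus the standby daemons cephadm has just deployed. The same information can be read programmatically with the "fs dump" mon command; the sketch below assumes python-rados and the default conffile, and the JSON key names (filesystems, mdsmap, standbys) are recalled from memory rather than taken from this log.

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # "fs dump" is answered by the mon's MDSMonitor and returns the FSMap that
    # print_map renders as text in the cluster log.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "fs dump", "format": "json"}), b'')
    fsmap = json.loads(outbuf)
    for fs in fsmap.get("filesystems", []):
        mdsmap = fs.get("mdsmap", {})
        print(mdsmap.get("fs_name"), "max_mds", mdsmap.get("max_mds"))
    print("standby daemons:", len(fsmap.get("standbys", [])))
finally:
    cluster.shutdown()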
Jan 23 09:52:38 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2199615937,v1:192.168.122.101:6805/2199615937] up:boot
Jan 23 09:52:38 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active} 2 up:standby
Jan 23 09:52:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.bcvzvj"} v 0)
Jan 23 09:52:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bcvzvj"}]: dispatch
Jan 23 09:52:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e7 all = 0
Jan 23 09:52:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bawllm", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 23 09:52:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v60: 260 pgs: 260 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 216 KiB/s rd, 0 B/s wr, 364 op/s
Jan 23 09:52:39 compute-0 ceph-mgr[74633]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 23 09:52:39 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 23 09:52:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 09:52:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 09:52:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 09:52:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 09:52:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 23 09:52:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 23 09:52:39 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 14 completed events
Jan 23 09:52:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:52:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:39 compute-0 ceph-mon[74335]: 8.16 scrub starts
Jan 23 09:52:39 compute-0 ceph-mon[74335]: 8.16 scrub ok
Jan 23 09:52:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:52:39 compute-0 ceph-mon[74335]: osdmap e58: 3 total, 3 up, 3 in
Jan 23 09:52:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 09:52:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bawllm", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 23 09:52:39 compute-0 ceph-mon[74335]: mds.? [v2:192.168.122.101:6804/2199615937,v1:192.168.122.101:6805/2199615937] up:boot
Jan 23 09:52:39 compute-0 ceph-mon[74335]: fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active} 2 up:standby
Jan 23 09:52:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bcvzvj"}]: dispatch
Jan 23 09:52:39 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 23 09:52:39 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 23 09:52:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 23 09:52:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 23 09:52:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:39 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event c701cab4-472b-4d02-b0ee-6ad4628dfc38 (Global Recovery Event) in 6 seconds
Jan 23 09:52:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:39 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 23 09:52:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 23 09:52:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 23 09:52:40 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 23 09:52:40 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 4a69600e-0a83-42f1-adbc-5613a8c900f8 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Jan 23 09:52:40 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev d1366bfb-af60-4c57-be3a-1bf4305f011b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 23 09:52:40 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event d1366bfb-af60-4c57-be3a-1bf4305f011b (PG autoscaler increasing pool 8 PGs from 1 to 32) in 5 seconds
Jan 23 09:52:40 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 13448a83-fada-485e-983a-ef34957422c7 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 23 09:52:40 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 13448a83-fada-485e-983a-ef34957422c7 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 4 seconds
Jan 23 09:52:40 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 06df48ca-a1c7-443f-a4ac-2d26ff78d5c7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 23 09:52:40 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 06df48ca-a1c7-443f-a4ac-2d26ff78d5c7 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 3 seconds
Jan 23 09:52:40 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev fdd7c963-8800-4877-b2d4-8fab20dead7a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 23 09:52:40 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event fdd7c963-8800-4877-b2d4-8fab20dead7a (PG autoscaler increasing pool 11 PGs from 1 to 32) in 2 seconds
Jan 23 09:52:40 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 4a69600e-0a83-42f1-adbc-5613a8c900f8 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Jan 23 09:52:40 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 4a69600e-0a83-42f1-adbc-5613a8c900f8 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.15( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.313900948s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.650726318s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.15( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.313842773s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.650726318s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.16( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.266151428s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.603240967s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.17( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325988770s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663146973s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.16( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.266090393s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.603240967s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.15( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315876961s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653137207s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.17( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325836182s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663146973s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.14( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315425873s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.652770996s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.14( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315359116s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.652770996s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.16( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325227737s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.662734985s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.17( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315299034s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.652816772s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.16( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325207710s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.662734985s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.17( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315280914s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.652816772s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.11( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325235367s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.662841797s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.11( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325222015s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.662841797s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.10( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325026512s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.662734985s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.10( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325010300s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.662734985s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.11( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315403938s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653152466s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.10( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315281868s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653076172s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.11( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315385818s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653152466s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.3( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325325012s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663131714s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.10( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315260887s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653076172s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.3( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325305939s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663131714s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.2( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315007210s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.652923584s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.3( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314883232s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.652832031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.2( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314989090s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.652923584s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.3( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314865112s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.652832031s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.e( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325107574s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663116455s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.15( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315198898s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653137207s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.e( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325079918s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663116455s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[11.0( v 51'48 (0'0,51'48] local-lis/les=49/50 n=8 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=12.034888268s) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 51'47 mlcod 51'47 active pruub 190.373016357s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.9( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325384140s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663558960s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.8( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314977646s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653167725s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.9( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.325366974s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663558960s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.8( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314958572s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653167725s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.8( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324860573s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663146973s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.8( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324843407s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663146973s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.9( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315017700s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653366089s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.f( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314612389s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.652984619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.9( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.315001488s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653366089s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.f( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314591408s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.652984619s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.b( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324739456s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663192749s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.b( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324722290s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663192749s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.a( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314684868s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653198242s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.f( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324685097s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663223267s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.a( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314665794s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653198242s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.f( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324666977s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663223267s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.d( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314544678s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653259277s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.c( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314515114s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653244019s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.d( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314523697s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653259277s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.c( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314498901s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653244019s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.d( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324384689s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663223267s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.d( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324366570s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663223267s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.a( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324285507s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663238525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.a( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324266434s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663238525s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.b( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314297676s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653274536s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.b( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314275742s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653274536s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.6( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324041367s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663284302s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.6( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324026108s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663284302s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.6( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314086914s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653427124s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.7( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324006081s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663391113s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.6( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314068794s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653427124s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.5( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314073563s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653472900s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.7( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.323985100s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663391113s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.5( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314057350s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653472900s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.5( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324107170s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663650513s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.5( v 44'12 (0'0,44'12] local-lis/les=57/58 n=1 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.324091911s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663650513s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.4( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.314017296s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653594971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.4( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313957214s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653594971s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.1b( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313889503s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653594971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.1b( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313868523s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653594971s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.18( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.323733330s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663528442s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.19( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313874245s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653686523s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.18( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.323718071s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663528442s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.19( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313853264s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653686523s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.18( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313842773s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653778076s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.1f( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313875198s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653823853s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.1f( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313861847s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653823853s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.18( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313824654s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653778076s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.1d( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.323594093s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663787842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.12( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.323422432s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663665771s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.1c( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313720703s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.653991699s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.12( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.323403358s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663665771s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.13( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.323491096s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 active pruub 192.663787842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.1d( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.323572159s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663787842s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.1c( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313697815s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.653991699s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[9.13( v 44'12 (0'0,44'12] local-lis/les=57/58 n=0 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=14.323475838s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=44'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.663787842s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.12( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313665390s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 active pruub 191.654098511s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[8.12( v 37'1 (0'0,37'1] local-lis/les=56/57 n=0 ec=56/36 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.313641548s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=37'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.654098511s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 59 pg[11.0( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=12.034888268s) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 51'47 mlcod 0'0 unknown pruub 190.373016357s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:40 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 23 09:52:40 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 23 09:52:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: Creating key for client.nfs.cephfs.0.0.compute-1.bawllm
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bawllm", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: pgmap v60: 260 pgs: 260 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 216 KiB/s rd, 0 B/s wr, 364 op/s
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:40 compute-0 ceph-mon[74335]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 23 09:52:40 compute-0 ceph-mon[74335]: 8.14 scrub starts
Jan 23 09:52:40 compute-0 ceph-mon[74335]: 8.14 scrub ok
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:52:40 compute-0 ceph-mon[74335]: osdmap e59: 3 total, 3 up, 3 in
Jan 23 09:52:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e8 new map
Jan 23 09:52:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2026-01-23T09:52:40.798611+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T09:51:34.000760+0000
                                           modified        2026-01-23T09:52:39.805778+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24193}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24193 members: 24193
                                           [mds.cephfs.compute-2.prgzmm{0:24193} state up:active seq 4 join_fscid=1 addr [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ymknms{-1:14502} state up:standby seq 1 addr [v2:192.168.122.100:6806/3718923574,v1:192.168.122.100:6807/3718923574] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.bcvzvj{-1:24200} state up:standby seq 1 addr [v2:192.168.122.101:6804/2199615937,v1:192.168.122.101:6805/2199615937] compat {c=[1],r=[1],i=[1fff]}]
Jan 23 09:52:40 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] up:active
Jan 23 09:52:40 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active} 2 up:standby
Jan 23 09:52:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 23 09:52:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v62: 322 pgs: 62 unknown, 260 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 0 B/s wr, 347 op/s
Jan 23 09:52:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 23 09:52:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 23 09:52:41 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.17( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.16( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.15( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.14( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.13( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.12( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1( v 51'48 (0'0,51'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.c( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.b( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.a( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.9( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.e( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.f( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.8( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.2( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.3( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.d( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.4( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.5( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.6( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.7( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.18( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.19( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1b( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1a( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1c( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1d( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1e( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1f( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.11( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.10( v 51'48 lc 0'0 (0'0,51'48] local-lis/les=49/50 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.16( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.15( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.14( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.17( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.12( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.13( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.c( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.0( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 51'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.b( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.a( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.9( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.e( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.f( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.8( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.2( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.d( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.4( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.3( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.5( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.6( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.7( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.18( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.19( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1d( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1e( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1f( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1b( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1c( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.11( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.1a( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 60 pg[11.10( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=51'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:41 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 23 09:52:41 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 23 09:52:41 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.bawllm-rgw
Jan 23 09:52:41 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.bawllm-rgw
Jan 23 09:52:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bawllm-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 23 09:52:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bawllm-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:41 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 23 09:52:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bawllm-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:41 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 23 09:52:41 compute-0 ceph-mgr[74633]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.bawllm's ganesha conf is defaulting to empty
Jan 23 09:52:41 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.bawllm's ganesha conf is defaulting to empty
Jan 23 09:52:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:41 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:41 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.bawllm on compute-1
Jan 23 09:52:41 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.bawllm on compute-1
Jan 23 09:52:41 compute-0 ceph-mon[74335]: 9.14 scrub starts
Jan 23 09:52:41 compute-0 ceph-mon[74335]: 9.14 scrub ok
Jan 23 09:52:41 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 23 09:52:41 compute-0 ceph-mon[74335]: mds.? [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] up:active
Jan 23 09:52:41 compute-0 ceph-mon[74335]: fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active} 2 up:standby
Jan 23 09:52:41 compute-0 ceph-mon[74335]: pgmap v62: 322 pgs: 62 unknown, 260 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 0 B/s wr, 347 op/s
Jan 23 09:52:41 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:41 compute-0 ceph-mon[74335]: osdmap e60: 3 total, 3 up, 3 in
Jan 23 09:52:41 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bawllm-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:41 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bawllm-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:41 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 23 09:52:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e9 new map
Jan 23 09:52:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2026-01-23T09:52:42.200523+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-23T09:51:34.000760+0000
                                           modified        2026-01-23T09:52:39.805778+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24193}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24193 members: 24193
                                           [mds.cephfs.compute-2.prgzmm{0:24193} state up:active seq 4 join_fscid=1 addr [v2:192.168.122.102:6804/1390112456,v1:192.168.122.102:6805/1390112456] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ymknms{-1:14502} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.100:6806/3718923574,v1:192.168.122.100:6807/3718923574] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.bcvzvj{-1:24200} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2199615937,v1:192.168.122.101:6805/2199615937] compat {c=[1],r=[1],i=[1fff]}]
Jan 23 09:52:42 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:52:42 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Updating MDS map to version 9 from mon.0
Jan 23 09:52:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 23 09:52:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 23 09:52:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3718923574,v1:192.168.122.100:6807/3718923574] up:standby
Jan 23 09:52:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2199615937,v1:192.168.122.101:6805/2199615937] up:standby
Jan 23 09:52:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active} 2 up:standby
Jan 23 09:52:43 compute-0 ceph-mon[74335]: Rados config object exists: conf-nfs.cephfs
Jan 23 09:52:43 compute-0 ceph-mon[74335]: Creating key for client.nfs.cephfs.0.0.compute-1.bawllm-rgw
Jan 23 09:52:43 compute-0 ceph-mon[74335]: 9.2 scrub starts
Jan 23 09:52:43 compute-0 ceph-mon[74335]: 9.2 scrub ok
Jan 23 09:52:43 compute-0 ceph-mon[74335]: Bind address in nfs.cephfs.0.0.compute-1.bawllm's ganesha conf is defaulting to empty
Jan 23 09:52:43 compute-0 ceph-mon[74335]: Deploying daemon nfs.cephfs.0.0.compute-1.bawllm on compute-1
Jan 23 09:52:43 compute-0 ceph-mon[74335]: 9.16 scrub starts
Jan 23 09:52:43 compute-0 ceph-mon[74335]: 9.16 scrub ok
Jan 23 09:52:43 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 09:52:43 compute-0 ceph-mon[74335]: osdmap e61: 3 total, 3 up, 3 in
Jan 23 09:52:43 compute-0 ceph-mon[74335]: mds.? [v2:192.168.122.100:6806/3718923574,v1:192.168.122.100:6807/3718923574] up:standby
Jan 23 09:52:43 compute-0 ceph-mon[74335]: mds.? [v2:192.168.122.101:6804/2199615937,v1:192.168.122.101:6805/2199615937] up:standby
Jan 23 09:52:43 compute-0 ceph-mon[74335]: fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active} 2 up:standby
Jan 23 09:52:43 compute-0 ceph-mon[74335]: 9.a scrub starts
Jan 23 09:52:43 compute-0 ceph-mon[74335]: 9.a scrub ok
Jan 23 09:52:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 93 unknown, 260 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 211 KiB/s rd, 0 B/s wr, 356 op/s
Jan 23 09:52:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 23 09:52:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 23 09:52:43 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 23 09:52:44 compute-0 ceph-mon[74335]: 9.17 scrub starts
Jan 23 09:52:44 compute-0 ceph-mon[74335]: 9.17 scrub ok
Jan 23 09:52:44 compute-0 ceph-mon[74335]: pgmap v65: 353 pgs: 93 unknown, 260 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 211 KiB/s rd, 0 B/s wr, 356 op/s
Jan 23 09:52:44 compute-0 ceph-mon[74335]: 9.6 deep-scrub starts
Jan 23 09:52:44 compute-0 ceph-mon[74335]: 9.6 deep-scrub ok
Jan 23 09:52:44 compute-0 ceph-mon[74335]: osdmap e62: 3 total, 3 up, 3 in
Jan 23 09:52:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:52:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:52:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:52:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:44 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.tykohi
Jan 23 09:52:44 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.tykohi
Jan 23 09:52:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.tykohi", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 23 09:52:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.tykohi", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 23 09:52:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.tykohi", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 23 09:52:44 compute-0 ceph-mgr[74633]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 23 09:52:44 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 23 09:52:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 23 09:52:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 23 09:52:44 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 20 completed events
Jan 23 09:52:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:52:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 23 09:52:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:44 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:44 compute-0 ceph-mgr[74633]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Jan 23 09:52:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 57 op/s; 105 B/s, 0 objects/s recovering
Jan 23 09:52:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 09:52:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 23 09:52:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 23 09:52:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 09:52:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:45 compute-0 ceph-mon[74335]: 8.3 scrub starts
Jan 23 09:52:45 compute-0 ceph-mon[74335]: 8.3 scrub ok
Jan 23 09:52:45 compute-0 ceph-mon[74335]: 9.11 scrub starts
Jan 23 09:52:45 compute-0 ceph-mon[74335]: 9.11 scrub ok
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.tykohi", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.tykohi", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 23 09:52:45 compute-0 ceph-mon[74335]: 8.15 scrub starts
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 23 09:52:45 compute-0 ceph-mon[74335]: 8.15 scrub ok
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 23 09:52:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:52:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 23 09:52:46 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:52:46 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 23 09:52:46 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:52:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 23 09:52:46 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[12.19( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[12.1c( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[12.8( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[12.a( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[12.e( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[12.c( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[12.b( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[12.6( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[12.12( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[12.10( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.17( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429490089s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.335311890s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.16( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.422159195s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.328079224s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.16( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.422135353s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.328079224s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.17( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429438591s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.335311890s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.14( v 60'51 (0'0,60'51] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429372787s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 60'50 mlcod 60'50 active pruub 195.335357666s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.14( v 60'51 (0'0,60'51] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429289818s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 60'50 mlcod 0'0 unknown NOTIFY pruub 195.335357666s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.13( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429712296s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.335845947s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.13( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429691315s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.335845947s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.12( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429506302s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.335739136s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429526329s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.335922241s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.12( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429485321s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.335739136s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429474831s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.335922241s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.a( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429621696s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.336410522s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.a( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429600716s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.336410522s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.e( v 60'51 (0'0,60'51] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429643631s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 60'50 mlcod 60'50 active pruub 195.336486816s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.f( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429663658s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.336532593s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.f( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429643631s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.336532593s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.e( v 60'51 (0'0,60'51] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429563522s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 60'50 mlcod 0'0 unknown NOTIFY pruub 195.336486816s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.8( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429533005s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.336563110s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.8( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429511070s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.336563110s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.3( v 60'51 (0'0,60'51] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429544449s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 60'50 mlcod 60'50 active pruub 195.336791992s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.3( v 60'51 (0'0,60'51] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429517746s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 60'50 mlcod 0'0 unknown NOTIFY pruub 195.336791992s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.4( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429448128s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.336746216s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.4( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429431915s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.336746216s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.5( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429497719s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.336853027s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.5( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429480553s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.336853027s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.7( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429478645s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.336944580s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.7( v 51'48 (0'0,51'48] local-lis/les=59/60 n=1 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429463387s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.336944580s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.19( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429383278s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.337051392s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1a( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429552078s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.337280273s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1a( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429531097s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.337280273s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1c( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429369926s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.337142944s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.19( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429363251s) [2] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.337051392s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1c( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429355621s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.337142944s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1d( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429265976s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.337142944s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1d( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.429248810s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.337142944s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1e( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.428946495s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.337158203s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1e( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.428930283s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.337158203s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1b( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.428934097s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 active pruub 195.337234497s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:46 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 63 pg[11.1b( v 51'48 (0'0,51'48] local-lis/les=59/60 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=10.428916931s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=51'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.337234497s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:46 compute-0 ceph-mon[74335]: Creating key for client.nfs.cephfs.1.0.compute-2.tykohi
Jan 23 09:52:46 compute-0 ceph-mon[74335]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 23 09:52:46 compute-0 ceph-mon[74335]: pgmap v67: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 57 op/s; 105 B/s, 0 objects/s recovering
Jan 23 09:52:46 compute-0 ceph-mon[74335]: 9.f scrub starts
Jan 23 09:52:46 compute-0 ceph-mon[74335]: 9.f scrub ok
Jan 23 09:52:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 49 op/s; 260 B/s, 1 objects/s recovering
Jan 23 09:52:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 23 09:52:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 23 09:52:47 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 23 09:52:47 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 23 09:52:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 23 09:52:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 23 09:52:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 23 09:52:47 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[12.10( empty local-lis/les=63/64 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[12.c( empty local-lis/les=63/64 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[12.6( empty local-lis/les=63/64 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[12.12( empty local-lis/les=63/64 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[12.b( empty local-lis/les=63/64 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[12.e( empty local-lis/les=63/64 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[12.a( empty local-lis/les=63/64 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[12.8( empty local-lis/les=63/64 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[12.1c( empty local-lis/les=63/64 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[12.19( empty local-lis/les=63/64 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=63) [1] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[10.a( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:47 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 64 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 23 09:52:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 23 09:52:47 compute-0 ceph-mon[74335]: 9.13 deep-scrub starts
Jan 23 09:52:47 compute-0 ceph-mon[74335]: 9.13 deep-scrub ok
Jan 23 09:52:47 compute-0 ceph-mon[74335]: 8.10 deep-scrub starts
Jan 23 09:52:47 compute-0 ceph-mon[74335]: 8.10 deep-scrub ok
Jan 23 09:52:47 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:52:47 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 23 09:52:47 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:52:47 compute-0 ceph-mon[74335]: osdmap e63: 3 total, 3 up, 3 in
Jan 23 09:52:47 compute-0 ceph-mon[74335]: 9.9 scrub starts
Jan 23 09:52:47 compute-0 ceph-mon[74335]: 9.9 scrub ok
Jan 23 09:52:47 compute-0 ceph-mon[74335]: pgmap v69: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 49 op/s; 260 B/s, 1 objects/s recovering
Jan 23 09:52:47 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 23 09:52:47 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 23 09:52:47 compute-0 ceph-mon[74335]: osdmap e64: 3 total, 3 up, 3 in
Jan 23 09:52:48 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 23 09:52:48 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 23 09:52:48 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 23 09:52:48 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.tykohi-rgw
Jan 23 09:52:48 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.tykohi-rgw
Jan 23 09:52:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.tykohi-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 23 09:52:48 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.tykohi-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:48 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.tykohi-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:48 compute-0 ceph-mgr[74633]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.tykohi's ganesha conf is defaulting to empty
Jan 23 09:52:48 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.tykohi's ganesha conf is defaulting to empty
Jan 23 09:52:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:48 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.tykohi on compute-2
Jan 23 09:52:48 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.tykohi on compute-2
Jan 23 09:52:48 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 23 09:52:48 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 23 09:52:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:52:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 23 09:52:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 23 09:52:48 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.a( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.a( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:48 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 65 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:52:48 compute-0 ceph-mon[74335]: 11.15 scrub starts
Jan 23 09:52:48 compute-0 ceph-mon[74335]: 11.15 scrub ok
Jan 23 09:52:48 compute-0 ceph-mon[74335]: 9.d scrub starts
Jan 23 09:52:48 compute-0 ceph-mon[74335]: 9.d scrub ok
Jan 23 09:52:48 compute-0 ceph-mon[74335]: 9.18 scrub starts
Jan 23 09:52:48 compute-0 ceph-mon[74335]: 9.18 scrub ok
Jan 23 09:52:48 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 23 09:52:48 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 23 09:52:48 compute-0 ceph-mon[74335]: Rados config object exists: conf-nfs.cephfs
Jan 23 09:52:48 compute-0 ceph-mon[74335]: Creating key for client.nfs.cephfs.1.0.compute-2.tykohi-rgw
Jan 23 09:52:48 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.tykohi-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:48 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.tykohi-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:48 compute-0 ceph-mon[74335]: Bind address in nfs.cephfs.1.0.compute-2.tykohi's ganesha conf is defaulting to empty
Jan 23 09:52:48 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:48 compute-0 ceph-mon[74335]: Deploying daemon nfs.cephfs.1.0.compute-2.tykohi on compute-2
Jan 23 09:52:48 compute-0 ceph-mon[74335]: osdmap e65: 3 total, 3 up, 3 in
Jan 23 09:52:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 1 active+recovering+remapped, 1 active+remapped, 8 remapped+peering, 14 active+recovery_wait+remapped, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.3 KiB/s wr, 111 op/s; 80/223 objects misplaced (35.874%); 227 B/s, 1 objects/s recovering
Jan 23 09:52:49 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 23 09:52:49 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 23 09:52:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 23 09:52:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 23 09:52:50 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 23 09:52:50 compute-0 ceph-mon[74335]: 8.e scrub starts
Jan 23 09:52:50 compute-0 ceph-mon[74335]: 8.e scrub ok
Jan 23 09:52:50 compute-0 ceph-mon[74335]: 9.10 scrub starts
Jan 23 09:52:50 compute-0 ceph-mon[74335]: 9.10 scrub ok
Jan 23 09:52:50 compute-0 ceph-mon[74335]: pgmap v72: 353 pgs: 1 active+recovering+remapped, 1 active+remapped, 8 remapped+peering, 14 active+recovery_wait+remapped, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.3 KiB/s wr, 111 op/s; 80/223 objects misplaced (35.874%); 227 B/s, 1 objects/s recovering
Jan 23 09:52:50 compute-0 ceph-mon[74335]: 9.c scrub starts
Jan 23 09:52:50 compute-0 ceph-mon[74335]: 9.c scrub ok
Jan 23 09:52:50 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.1 deep-scrub starts
Jan 23 09:52:50 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.1 deep-scrub ok
Jan 23 09:52:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:52:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:52:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:52:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:50 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.fenqiu
Jan 23 09:52:50 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.fenqiu
Jan 23 09:52:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fenqiu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 23 09:52:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fenqiu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 23 09:52:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fenqiu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 23 09:52:50 compute-0 ceph-mgr[74633]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 23 09:52:50 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 23 09:52:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 23 09:52:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 23 09:52:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 23 09:52:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 11 peering, 8 remapped+peering, 5 active+recovery_wait+remapped, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.6 KiB/s wr, 69 op/s; 29/223 objects misplaced (13.004%); 240 B/s, 13 objects/s recovering
Jan 23 09:52:51 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 23 09:52:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:51 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 23 09:52:51 compute-0 ceph-mon[74335]: osdmap e66: 3 total, 3 up, 3 in
Jan 23 09:52:51 compute-0 ceph-mon[74335]: 8.1 deep-scrub starts
Jan 23 09:52:51 compute-0 ceph-mon[74335]: 8.1 deep-scrub ok
Jan 23 09:52:51 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:51 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:51 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:51 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fenqiu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 23 09:52:51 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fenqiu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 23 09:52:51 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 23 09:52:51 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.6( v 62'763 (0'0,62'763] local-lis/les=0/0 n=6 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=62'763 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.2( v 62'759 (0'0,62'759] local-lis/les=0/0 n=6 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=62'759 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.6( v 62'763 (0'0,62'763] local-lis/les=0/0 n=6 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=62'763 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.2( v 62'759 (0'0,62'759] local-lis/les=0/0 n=6 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=62'759 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.e( v 62'764 (0'0,62'764] local-lis/les=0/0 n=6 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=62'764 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.e( v 62'764 (0'0,62'764] local-lis/les=0/0 n=6 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=62'764 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.1e( v 62'763 (0'0,62'763] local-lis/les=0/0 n=5 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=62'763 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.a( v 62'773 (0'0,62'773] local-lis/les=0/0 n=7 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=62'773 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.a( v 62'773 (0'0,62'773] local-lis/les=0/0 n=7 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=62'773 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.12( v 61'760 (0'0,61'760] local-lis/les=0/0 n=4 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=61'760 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.12( v 61'760 (0'0,61'760] local-lis/les=0/0 n=4 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=61'760 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.16( v 58'754 (0'0,58'754] local-lis/les=0/0 n=4 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=58'754 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.1a( v 61'756 (0'0,61'756] local-lis/les=0/0 n=4 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=61'756 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.1a( v 61'756 (0'0,61'756] local-lis/les=0/0 n=4 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=61'756 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.1e( v 62'763 (0'0,62'763] local-lis/les=0/0 n=5 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=62'763 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:51 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 67 pg[10.16( v 58'754 (0'0,58'754] local-lis/les=0/0 n=4 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=58'754 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:52:51 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 23 09:52:51 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 23 09:52:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 23 09:52:52 compute-0 ceph-mon[74335]: Creating key for client.nfs.cephfs.2.0.compute-0.fenqiu
Jan 23 09:52:52 compute-0 ceph-mon[74335]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 23 09:52:52 compute-0 ceph-mon[74335]: pgmap v74: 353 pgs: 11 peering, 8 remapped+peering, 5 active+recovery_wait+remapped, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.6 KiB/s wr, 69 op/s; 29/223 objects misplaced (13.004%); 240 B/s, 13 objects/s recovering
Jan 23 09:52:52 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 23 09:52:52 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:52 compute-0 ceph-mon[74335]: osdmap e67: 3 total, 3 up, 3 in
Jan 23 09:52:52 compute-0 ceph-mon[74335]: 9.0 scrub starts
Jan 23 09:52:52 compute-0 ceph-mon[74335]: 9.0 scrub ok
Jan 23 09:52:52 compute-0 ceph-mon[74335]: 10.17 scrub starts
Jan 23 09:52:52 compute-0 ceph-mon[74335]: 10.17 scrub ok
Jan 23 09:52:52 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1 deep-scrub starts
Jan 23 09:52:52 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1 deep-scrub ok
Jan 23 09:52:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 23 09:52:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 23 09:52:52 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 68 pg[10.16( v 58'754 (0'0,58'754] local-lis/les=67/68 n=4 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=58'754 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:52 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 68 pg[10.a( v 62'773 (0'0,62'773] local-lis/les=67/68 n=7 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=62'773 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:52 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 68 pg[10.e( v 62'764 (0'0,62'764] local-lis/les=67/68 n=6 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=62'764 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:52 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 68 pg[10.2( v 62'759 (0'0,62'759] local-lis/les=67/68 n=6 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=62'759 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:52 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 68 pg[10.6( v 62'763 (0'0,62'763] local-lis/les=67/68 n=6 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=62'763 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:52 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 68 pg[10.1a( v 61'756 (0'0,61'756] local-lis/les=67/68 n=4 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=61'756 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:52 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 68 pg[10.1e( v 62'763 (0'0,62'763] local-lis/les=67/68 n=5 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=62'763 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:52 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 68 pg[10.12( v 61'760 (0'0,61'760] local-lis/les=67/68 n=4 ec=59/46 lis/c=65/59 les/c/f=66/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=61'760 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:52:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 11 peering, 8 remapped+peering, 5 active+recovery_wait+remapped, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.6 KiB/s wr, 68 op/s; 29/223 objects misplaced (13.004%); 239 B/s, 12 objects/s recovering
Jan 23 09:52:53 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 23 09:52:53 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 23 09:52:53 compute-0 ceph-mon[74335]: 9.1 deep-scrub starts
Jan 23 09:52:53 compute-0 ceph-mon[74335]: 9.1 deep-scrub ok
Jan 23 09:52:53 compute-0 ceph-mon[74335]: osdmap e68: 3 total, 3 up, 3 in
Jan 23 09:52:53 compute-0 ceph-mon[74335]: 10.7 scrub starts
Jan 23 09:52:53 compute-0 ceph-mon[74335]: 10.7 scrub ok
Jan 23 09:52:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:52:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 23 09:52:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 23 09:52:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 23 09:52:54 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 23 09:52:54 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 23 09:52:54 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 23 09:52:54 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 23 09:52:54 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.fenqiu-rgw
Jan 23 09:52:54 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.fenqiu-rgw
Jan 23 09:52:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fenqiu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 23 09:52:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fenqiu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fenqiu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:54 compute-0 ceph-mgr[74633]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.fenqiu's ganesha conf is defaulting to empty
Jan 23 09:52:54 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.fenqiu's ganesha conf is defaulting to empty
Jan 23 09:52:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:52:54 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:54 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.fenqiu on compute-0
Jan 23 09:52:54 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.fenqiu on compute-0
Jan 23 09:52:54 compute-0 sudo[94791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:52:54 compute-0 sudo[94791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:52:54 compute-0 sudo[94791]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:54 compute-0 ceph-mon[74335]: pgmap v77: 353 pgs: 11 peering, 8 remapped+peering, 5 active+recovery_wait+remapped, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.6 KiB/s wr, 68 op/s; 29/223 objects misplaced (13.004%); 239 B/s, 12 objects/s recovering
Jan 23 09:52:54 compute-0 ceph-mon[74335]: 8.0 scrub starts
Jan 23 09:52:54 compute-0 ceph-mon[74335]: 8.0 scrub ok
Jan 23 09:52:54 compute-0 ceph-mon[74335]: 10.5 scrub starts
Jan 23 09:52:54 compute-0 ceph-mon[74335]: 10.5 scrub ok
Jan 23 09:52:54 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 23 09:52:54 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 23 09:52:54 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fenqiu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 09:52:54 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fenqiu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 09:52:54 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:52:54 compute-0 sudo[94816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:52:54 compute-0 sudo[94816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:52:54 compute-0 podman[94880]: 2026-01-23 09:52:54.949284666 +0000 UTC m=+0.046771674 container create ce9f3afbeeeb6336a81812a863df74584b5c5882784d0db6932eb922e894bf48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 09:52:54 compute-0 systemd[1]: Started libpod-conmon-ce9f3afbeeeb6336a81812a863df74584b5c5882784d0db6932eb922e894bf48.scope.
Jan 23 09:52:55 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:52:55 compute-0 podman[94880]: 2026-01-23 09:52:54.927555254 +0000 UTC m=+0.025042282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:52:55 compute-0 podman[94880]: 2026-01-23 09:52:55.032893371 +0000 UTC m=+0.130380399 container init ce9f3afbeeeb6336a81812a863df74584b5c5882784d0db6932eb922e894bf48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 23 09:52:55 compute-0 podman[94880]: 2026-01-23 09:52:55.039570443 +0000 UTC m=+0.137057461 container start ce9f3afbeeeb6336a81812a863df74584b5c5882784d0db6932eb922e894bf48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:52:55 compute-0 blissful_robinson[94896]: 167 167
Jan 23 09:52:55 compute-0 systemd[1]: libpod-ce9f3afbeeeb6336a81812a863df74584b5c5882784d0db6932eb922e894bf48.scope: Deactivated successfully.
Jan 23 09:52:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v78: 353 pgs: 11 peering, 5 active+recovery_wait+remapped, 337 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.3 KiB/s wr, 3 op/s; 29/222 objects misplaced (13.063%); 318 B/s, 15 objects/s recovering
Jan 23 09:52:55 compute-0 podman[94880]: 2026-01-23 09:52:55.043277884 +0000 UTC m=+0.140764892 container attach ce9f3afbeeeb6336a81812a863df74584b5c5882784d0db6932eb922e894bf48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:52:55 compute-0 podman[94880]: 2026-01-23 09:52:55.047061727 +0000 UTC m=+0.144548735 container died ce9f3afbeeeb6336a81812a863df74584b5c5882784d0db6932eb922e894bf48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:52:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-78eaa4f7d93a82ffe9458407aa3f9e5366f020eff19ba1552ffb8776a46cbc58-merged.mount: Deactivated successfully.
Jan 23 09:52:55 compute-0 podman[94880]: 2026-01-23 09:52:55.092674948 +0000 UTC m=+0.190161956 container remove ce9f3afbeeeb6336a81812a863df74584b5c5882784d0db6932eb922e894bf48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:52:55 compute-0 systemd[1]: libpod-conmon-ce9f3afbeeeb6336a81812a863df74584b5c5882784d0db6932eb922e894bf48.scope: Deactivated successfully.
Jan 23 09:52:55 compute-0 systemd[1]: Reloading.
Jan 23 09:52:55 compute-0 systemd-rc-local-generator[94941]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:52:55 compute-0 systemd-sysv-generator[94944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:52:55 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.4 deep-scrub starts
Jan 23 09:52:55 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.4 deep-scrub ok
Jan 23 09:52:55 compute-0 systemd[1]: Reloading.
Jan 23 09:52:55 compute-0 systemd-sysv-generator[94985]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:52:55 compute-0 systemd-rc-local-generator[94981]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:52:55 compute-0 ceph-mon[74335]: 8.7 scrub starts
Jan 23 09:52:55 compute-0 ceph-mon[74335]: 8.7 scrub ok
Jan 23 09:52:55 compute-0 ceph-mon[74335]: Rados config object exists: conf-nfs.cephfs
Jan 23 09:52:55 compute-0 ceph-mon[74335]: Creating key for client.nfs.cephfs.2.0.compute-0.fenqiu-rgw
Jan 23 09:52:55 compute-0 ceph-mon[74335]: Bind address in nfs.cephfs.2.0.compute-0.fenqiu's ganesha conf is defaulting to empty
Jan 23 09:52:55 compute-0 ceph-mon[74335]: Deploying daemon nfs.cephfs.2.0.compute-0.fenqiu on compute-0
Jan 23 09:52:55 compute-0 ceph-mon[74335]: 9.3 scrub starts
Jan 23 09:52:55 compute-0 ceph-mon[74335]: 9.3 scrub ok
Jan 23 09:52:55 compute-0 ceph-mon[74335]: 8.17 deep-scrub starts
Jan 23 09:52:55 compute-0 ceph-mon[74335]: 8.17 deep-scrub ok
Jan 23 09:52:55 compute-0 ceph-mon[74335]: pgmap v78: 353 pgs: 11 peering, 5 active+recovery_wait+remapped, 337 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.3 KiB/s wr, 3 op/s; 29/222 objects misplaced (13.063%); 318 B/s, 15 objects/s recovering
Jan 23 09:52:55 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:52:55 compute-0 podman[95037]: 2026-01-23 09:52:55.933098691 +0000 UTC m=+0.046123637 container create bd89f1243d2eeec95b4e706e560db0d4f07fe842ddf566993f13eeee07fb7987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 09:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13260c323821708d5d7a8166da641e5eea1cb9d0de01fd29d8822dc04af91ce0/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13260c323821708d5d7a8166da641e5eea1cb9d0de01fd29d8822dc04af91ce0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13260c323821708d5d7a8166da641e5eea1cb9d0de01fd29d8822dc04af91ce0/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13260c323821708d5d7a8166da641e5eea1cb9d0de01fd29d8822dc04af91ce0/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:52:56 compute-0 podman[95037]: 2026-01-23 09:52:56.001946244 +0000 UTC m=+0.114971210 container init bd89f1243d2eeec95b4e706e560db0d4f07fe842ddf566993f13eeee07fb7987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:52:56 compute-0 podman[95037]: 2026-01-23 09:52:55.911515873 +0000 UTC m=+0.024540849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:52:56 compute-0 podman[95037]: 2026-01-23 09:52:56.007881876 +0000 UTC m=+0.120906832 container start bd89f1243d2eeec95b4e706e560db0d4f07fe842ddf566993f13eeee07fb7987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 09:52:56 compute-0 bash[95037]: bd89f1243d2eeec95b4e706e560db0d4f07fe842ddf566993f13eeee07fb7987
Jan 23 09:52:56 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 09:52:56 compute-0 sudo[94816]: pam_unix(sudo:session): session closed for user root
Jan 23 09:52:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:52:56 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 09:52:56 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 23 09:52:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 23 09:52:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:52:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:56 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 9e0d73cd-767d-4d91-90b4-c0139b77151f (Updating nfs.cephfs deployment (+3 -> 3))
Jan 23 09:52:56 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 9e0d73cd-767d-4d91-90b4-c0139b77151f (Updating nfs.cephfs deployment (+3 -> 3)) in 18 seconds
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 09:52:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 09:52:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:52:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 09:52:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:56 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev d358dc74-4710-4dba-83e4-bef606d6850f (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Jan 23 09:52:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Jan 23 09:52:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:56 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.mnxlgm on compute-1
Jan 23 09:52:56 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.mnxlgm on compute-1
Jan 23 09:52:56 compute-0 ceph-mon[74335]: 9.4 deep-scrub starts
Jan 23 09:52:56 compute-0 ceph-mon[74335]: 9.4 deep-scrub ok
Jan 23 09:52:56 compute-0 ceph-mon[74335]: 8.1f scrub starts
Jan 23 09:52:56 compute-0 ceph-mon[74335]: 8.1f scrub ok
Jan 23 09:52:56 compute-0 ceph-mon[74335]: 9.e scrub starts
Jan 23 09:52:56 compute-0 ceph-mon[74335]: 9.e scrub ok
Jan 23 09:52:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:52:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 296 B/s, 15 objects/s recovering
Jan 23 09:52:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 23 09:52:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 23 09:52:57 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 23 09:52:57 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 23 09:52:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 23 09:52:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 23 09:52:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 23 09:52:57 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 23 09:52:57 compute-0 ceph-mon[74335]: 9.1a scrub starts
Jan 23 09:52:57 compute-0 ceph-mon[74335]: 9.1a scrub ok
Jan 23 09:52:57 compute-0 ceph-mon[74335]: 8.5 deep-scrub starts
Jan 23 09:52:57 compute-0 ceph-mon[74335]: 8.5 deep-scrub ok
Jan 23 09:52:57 compute-0 ceph-mon[74335]: Deploying daemon haproxy.nfs.cephfs.compute-1.mnxlgm on compute-1
Jan 23 09:52:57 compute-0 ceph-mon[74335]: 9.15 scrub starts
Jan 23 09:52:57 compute-0 ceph-mon[74335]: 9.15 scrub ok
Jan 23 09:52:57 compute-0 ceph-mon[74335]: pgmap v79: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 296 B/s, 15 objects/s recovering
Jan 23 09:52:57 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 23 09:52:58 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 23 09:52:58 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 23 09:52:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:52:58 compute-0 ceph-mon[74335]: 9.1b scrub starts
Jan 23 09:52:58 compute-0 ceph-mon[74335]: 9.1b scrub ok
Jan 23 09:52:58 compute-0 ceph-mon[74335]: 8.2 deep-scrub starts
Jan 23 09:52:58 compute-0 ceph-mon[74335]: 8.2 deep-scrub ok
Jan 23 09:52:58 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 23 09:52:58 compute-0 ceph-mon[74335]: osdmap e69: 3 total, 3 up, 3 in
Jan 23 09:52:58 compute-0 ceph-mon[74335]: 8.1b scrub starts
Jan 23 09:52:58 compute-0 ceph-mon[74335]: 8.1b scrub ok
Jan 23 09:52:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 6 objects/s recovering
Jan 23 09:52:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 23 09:52:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 23 09:52:59 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 23 09:52:59 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 23 09:52:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 23 09:52:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 23 09:52:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 23 09:52:59 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 23 09:52:59 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 21 completed events
Jan 23 09:52:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:52:59 compute-0 ceph-mon[74335]: 8.1a scrub starts
Jan 23 09:52:59 compute-0 ceph-mon[74335]: 8.1a scrub ok
Jan 23 09:52:59 compute-0 ceph-mon[74335]: 8.6 scrub starts
Jan 23 09:52:59 compute-0 ceph-mon[74335]: 8.6 scrub ok
Jan 23 09:52:59 compute-0 ceph-mon[74335]: 8.4 scrub starts
Jan 23 09:52:59 compute-0 ceph-mon[74335]: 8.4 scrub ok
Jan 23 09:52:59 compute-0 ceph-mon[74335]: pgmap v81: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 92 B/s, 6 objects/s recovering
Jan 23 09:52:59 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 23 09:53:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:00 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 98b6ac24-8057-4189-8582-2bd10be25c2e (Global Recovery Event) in 15 seconds
Jan 23 09:53:00 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 23 09:53:00 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 23 09:53:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 90 B/s, 6 objects/s recovering
Jan 23 09:53:01 compute-0 ceph-mon[74335]: 9.19 scrub starts
Jan 23 09:53:01 compute-0 ceph-mon[74335]: 9.19 scrub ok
Jan 23 09:53:01 compute-0 ceph-mon[74335]: 8.11 scrub starts
Jan 23 09:53:01 compute-0 ceph-mon[74335]: 8.11 scrub ok
Jan 23 09:53:01 compute-0 ceph-mon[74335]: 8.18 scrub starts
Jan 23 09:53:01 compute-0 ceph-mon[74335]: 8.18 scrub ok
Jan 23 09:53:01 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 23 09:53:01 compute-0 ceph-mon[74335]: osdmap e70: 3 total, 3 up, 3 in
Jan 23 09:53:01 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:01 compute-0 ceph-mon[74335]: 8.12 scrub starts
Jan 23 09:53:01 compute-0 ceph-mon[74335]: 8.12 scrub ok
Jan 23 09:53:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 23 09:53:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 23 09:53:01 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 23 09:53:01 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 23 09:53:01 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 23 09:53:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:53:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 23 09:53:02 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 23 09:53:02 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 23 09:53:02 compute-0 ceph-mon[74335]: 9.1e scrub starts
Jan 23 09:53:02 compute-0 ceph-mon[74335]: 9.1e scrub ok
Jan 23 09:53:02 compute-0 ceph-mon[74335]: 8.b scrub starts
Jan 23 09:53:02 compute-0 ceph-mon[74335]: 8.b scrub ok
Jan 23 09:53:02 compute-0 ceph-mon[74335]: pgmap v83: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 90 B/s, 6 objects/s recovering
Jan 23 09:53:02 compute-0 ceph-mon[74335]: osdmap e71: 3 total, 3 up, 3 in
Jan 23 09:53:02 compute-0 ceph-mon[74335]: 9.1f scrub starts
Jan 23 09:53:02 compute-0 ceph-mon[74335]: 9.1f scrub ok
Jan 23 09:53:02 compute-0 ceph-mon[74335]: 9.12 scrub starts
Jan 23 09:53:02 compute-0 ceph-mon[74335]: 9.12 scrub ok
Jan 23 09:53:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 23 09:53:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:02 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 23 09:53:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:53:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 23 09:53:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:02 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.yeogal on compute-0
Jan 23 09:53:02 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.yeogal on compute-0
Jan 23 09:53:02 compute-0 sudo[95105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:53:02 compute-0 sudo[95105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:53:02 compute-0 sudo[95105]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:03 compute-0 sudo[95130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:53:03 compute-0 sudo[95130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:53:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:53:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:53:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:53:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:53:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:53:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:53:03 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 23 09:53:03 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 23 09:53:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:03 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:03 compute-0 ceph-mon[74335]: 8.c scrub starts
Jan 23 09:53:03 compute-0 ceph-mon[74335]: 8.c scrub ok
Jan 23 09:53:03 compute-0 ceph-mon[74335]: 8.1e scrub starts
Jan 23 09:53:03 compute-0 ceph-mon[74335]: 8.1e scrub ok
Jan 23 09:53:03 compute-0 ceph-mon[74335]: 9.5 deep-scrub starts
Jan 23 09:53:03 compute-0 ceph-mon[74335]: 9.5 deep-scrub ok
Jan 23 09:53:03 compute-0 ceph-mon[74335]: 8.19 scrub starts
Jan 23 09:53:03 compute-0 ceph-mon[74335]: 8.19 scrub ok
Jan 23 09:53:03 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:03 compute-0 ceph-mon[74335]: osdmap e72: 3 total, 3 up, 3 in
Jan 23 09:53:03 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:03 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:53:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 23 09:53:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 23 09:53:03 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 23 09:53:04 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 23 09:53:04 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 23 09:53:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 23 09:53:04 compute-0 ceph-mon[74335]: Deploying daemon haproxy.nfs.cephfs.compute-0.yeogal on compute-0
Jan 23 09:53:04 compute-0 ceph-mon[74335]: pgmap v86: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:04 compute-0 ceph-mon[74335]: 9.1c scrub starts
Jan 23 09:53:04 compute-0 ceph-mon[74335]: 9.1c scrub ok
Jan 23 09:53:04 compute-0 ceph-mon[74335]: 9.1d scrub starts
Jan 23 09:53:04 compute-0 ceph-mon[74335]: 9.1d scrub ok
Jan 23 09:53:04 compute-0 ceph-mon[74335]: osdmap e73: 3 total, 3 up, 3 in
Jan 23 09:53:04 compute-0 ceph-mon[74335]: 8.8 scrub starts
Jan 23 09:53:04 compute-0 ceph-mon[74335]: 8.8 scrub ok
Jan 23 09:53:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 23 09:53:04 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 23 09:53:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:05 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 22 completed events
Jan 23 09:53:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:53:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:05 compute-0 ceph-mgr[74633]: [progress WARNING root] Starting Global Recovery Event,4 pgs not in active + clean state
Jan 23 09:53:05 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 23 09:53:05 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 23 09:53:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:05 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:05 compute-0 ceph-mon[74335]: 8.1d scrub starts
Jan 23 09:53:05 compute-0 ceph-mon[74335]: 8.1d scrub ok
Jan 23 09:53:05 compute-0 ceph-mon[74335]: 8.1c scrub starts
Jan 23 09:53:05 compute-0 ceph-mon[74335]: 8.1c scrub ok
Jan 23 09:53:05 compute-0 ceph-mon[74335]: 12.15 deep-scrub starts
Jan 23 09:53:05 compute-0 ceph-mon[74335]: 12.15 deep-scrub ok
Jan 23 09:53:05 compute-0 ceph-mon[74335]: osdmap e74: 3 total, 3 up, 3 in
Jan 23 09:53:05 compute-0 ceph-mon[74335]: pgmap v89: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:05 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:06 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 23 09:53:06 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 23 09:53:06 compute-0 ceph-mon[74335]: 8.13 scrub starts
Jan 23 09:53:06 compute-0 ceph-mon[74335]: 8.13 scrub ok
Jan 23 09:53:06 compute-0 ceph-mon[74335]: 9.b deep-scrub starts
Jan 23 09:53:06 compute-0 ceph-mon[74335]: 9.b deep-scrub ok
Jan 23 09:53:06 compute-0 ceph-mon[74335]: 12.f scrub starts
Jan 23 09:53:06 compute-0 ceph-mon[74335]: 12.f scrub ok
Jan 23 09:53:06 compute-0 ceph-mon[74335]: 12.d scrub starts
Jan 23 09:53:06 compute-0 ceph-mon[74335]: 12.d scrub ok
Jan 23 09:53:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 531 B/s wr, 53 op/s; 80 B/s, 4 objects/s recovering
Jan 23 09:53:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 23 09:53:07 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 23 09:53:07 compute-0 podman[95196]: 2026-01-23 09:53:07.310124189 +0000 UTC m=+3.819185432 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 23 09:53:07 compute-0 podman[95196]: 2026-01-23 09:53:07.327887923 +0000 UTC m=+3.836949146 container create cbf6be955babdffbc2a62ffae52c63f4adc3b797b1c595bc7a8328c16c51b6b1 (image=quay.io/ceph/haproxy:2.3, name=fervent_sutherland)
Jan 23 09:53:07 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 23 09:53:07 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 23 09:53:07 compute-0 systemd[1]: Started libpod-conmon-cbf6be955babdffbc2a62ffae52c63f4adc3b797b1c595bc7a8328c16c51b6b1.scope.
Jan 23 09:53:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:53:07 compute-0 podman[95196]: 2026-01-23 09:53:07.424630407 +0000 UTC m=+3.933691640 container init cbf6be955babdffbc2a62ffae52c63f4adc3b797b1c595bc7a8328c16c51b6b1 (image=quay.io/ceph/haproxy:2.3, name=fervent_sutherland)
Jan 23 09:53:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:07 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:07 compute-0 podman[95196]: 2026-01-23 09:53:07.432283486 +0000 UTC m=+3.941344709 container start cbf6be955babdffbc2a62ffae52c63f4adc3b797b1c595bc7a8328c16c51b6b1 (image=quay.io/ceph/haproxy:2.3, name=fervent_sutherland)
Jan 23 09:53:07 compute-0 podman[95196]: 2026-01-23 09:53:07.435971811 +0000 UTC m=+3.945033034 container attach cbf6be955babdffbc2a62ffae52c63f4adc3b797b1c595bc7a8328c16c51b6b1 (image=quay.io/ceph/haproxy:2.3, name=fervent_sutherland)
Jan 23 09:53:07 compute-0 fervent_sutherland[95310]: 0 0
Jan 23 09:53:07 compute-0 systemd[1]: libpod-cbf6be955babdffbc2a62ffae52c63f4adc3b797b1c595bc7a8328c16c51b6b1.scope: Deactivated successfully.
Jan 23 09:53:07 compute-0 podman[95196]: 2026-01-23 09:53:07.43885438 +0000 UTC m=+3.947915603 container died cbf6be955babdffbc2a62ffae52c63f4adc3b797b1c595bc7a8328c16c51b6b1 (image=quay.io/ceph/haproxy:2.3, name=fervent_sutherland)
Jan 23 09:53:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-add930a537ce0e04e5c96e6a644ea8da38a547e57db5606ac758aff2797f341e-merged.mount: Deactivated successfully.
Jan 23 09:53:07 compute-0 podman[95196]: 2026-01-23 09:53:07.483526462 +0000 UTC m=+3.992587685 container remove cbf6be955babdffbc2a62ffae52c63f4adc3b797b1c595bc7a8328c16c51b6b1 (image=quay.io/ceph/haproxy:2.3, name=fervent_sutherland)
Jan 23 09:53:07 compute-0 systemd[1]: libpod-conmon-cbf6be955babdffbc2a62ffae52c63f4adc3b797b1c595bc7a8328c16c51b6b1.scope: Deactivated successfully.
Jan 23 09:53:07 compute-0 systemd[1]: Reloading.
Jan 23 09:53:07 compute-0 systemd-rc-local-generator[95352]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:53:07 compute-0 systemd-sysv-generator[95358]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:53:07 compute-0 systemd[1]: Reloading.
Jan 23 09:53:07 compute-0 systemd-sysv-generator[95401]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:53:07 compute-0 systemd-rc-local-generator[95398]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:53:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 23 09:53:08 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 23 09:53:08 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 23 09:53:08 compute-0 ceph-mon[74335]: 11.0 scrub starts
Jan 23 09:53:08 compute-0 ceph-mon[74335]: 11.0 scrub ok
Jan 23 09:53:08 compute-0 ceph-mon[74335]: 8.f scrub starts
Jan 23 09:53:08 compute-0 ceph-mon[74335]: 8.f scrub ok
Jan 23 09:53:08 compute-0 ceph-mon[74335]: pgmap v90: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 531 B/s wr, 53 op/s; 80 B/s, 4 objects/s recovering
Jan 23 09:53:08 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 23 09:53:08 compute-0 ceph-mon[74335]: 12.5 scrub starts
Jan 23 09:53:08 compute-0 ceph-mon[74335]: 12.5 scrub ok
Jan 23 09:53:08 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.yeogal for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:53:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 23 09:53:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 23 09:53:08 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 75 pg[10.1d( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=75) [1] r=0 lpr=75 pi=[67,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 75 pg[10.5( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=66/66 les/c/f=67/67/0 sis=75) [1] r=0 lpr=75 pi=[66,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 75 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=75) [1] r=0 lpr=75 pi=[67,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 75 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=75) [1] r=0 lpr=75 pi=[67,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:53:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 23 09:53:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 23 09:53:08 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 76 pg[10.5( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=66/66 les/c/f=67/67/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 76 pg[10.1d( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[67,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 76 pg[10.5( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=66/66 les/c/f=67/67/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 76 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[67,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 76 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[67,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 76 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[67,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 76 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[67,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 76 pg[10.1d( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=76) [1]/[2] r=-1 lpr=76 pi=[67,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:08 compute-0 podman[95450]: 2026-01-23 09:53:08.671981747 +0000 UTC m=+0.042399902 container create 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/915da6ae77c99293a7245b9b84b0c8f5a3e0d7b42a42aa6d7d829c17c3a6bb4e/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:08 compute-0 podman[95450]: 2026-01-23 09:53:08.729812039 +0000 UTC m=+0.100230234 container init 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 09:53:08 compute-0 podman[95450]: 2026-01-23 09:53:08.735184096 +0000 UTC m=+0.105602251 container start 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 09:53:08 compute-0 bash[95450]: 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178
Jan 23 09:53:08 compute-0 podman[95450]: 2026-01-23 09:53:08.654141131 +0000 UTC m=+0.024559306 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 23 09:53:08 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.yeogal for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:53:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [NOTICE] 022/095308 (2) : New worker #1 (4) forked
Jan 23 09:53:08 compute-0 sudo[95130]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:53:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:53:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 23 09:53:08 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:08 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.bbaqsj on compute-2
Jan 23 09:53:08 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.bbaqsj on compute-2
Jan 23 09:53:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v93: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 556 B/s wr, 56 op/s; 84 B/s, 4 objects/s recovering
Jan 23 09:53:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 23 09:53:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 23 09:53:09 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 23 09:53:09 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 23 09:53:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:09 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:09 compute-0 ceph-mon[74335]: 11.c scrub starts
Jan 23 09:53:09 compute-0 ceph-mon[74335]: 11.c scrub ok
Jan 23 09:53:09 compute-0 ceph-mon[74335]: 8.9 scrub starts
Jan 23 09:53:09 compute-0 ceph-mon[74335]: 8.9 scrub ok
Jan 23 09:53:09 compute-0 ceph-mon[74335]: 11.b scrub starts
Jan 23 09:53:09 compute-0 ceph-mon[74335]: 11.b scrub ok
Jan 23 09:53:09 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 23 09:53:09 compute-0 ceph-mon[74335]: osdmap e75: 3 total, 3 up, 3 in
Jan 23 09:53:09 compute-0 ceph-mon[74335]: osdmap e76: 3 total, 3 up, 3 in
Jan 23 09:53:09 compute-0 ceph-mon[74335]: 12.0 scrub starts
Jan 23 09:53:09 compute-0 ceph-mon[74335]: 12.0 scrub ok
Jan 23 09:53:09 compute-0 ceph-mon[74335]: 9.8 scrub starts
Jan 23 09:53:09 compute-0 ceph-mon[74335]: 9.8 scrub ok
Jan 23 09:53:09 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:09 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:09 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:09 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 23 09:53:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 23 09:53:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 23 09:53:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 23 09:53:09 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 23 09:53:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 77 pg[10.16( v 58'754 (0'0,58'754] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=77 pruub=14.743757248s) [0] r=-1 lpr=77 pi=[67,77)/1 crt=58'754 mlcod 0'0 active pruub 222.433013916s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 77 pg[10.16( v 58'754 (0'0,58'754] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=77 pruub=14.743721962s) [0] r=-1 lpr=77 pi=[67,77)/1 crt=58'754 mlcod 0'0 unknown NOTIFY pruub 222.433013916s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 77 pg[10.e( v 62'764 (0'0,62'764] local-lis/les=67/68 n=6 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=77 pruub=14.746008873s) [0] r=-1 lpr=77 pi=[67,77)/1 crt=62'764 mlcod 0'0 active pruub 222.436447144s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 77 pg[10.e( v 62'764 (0'0,62'764] local-lis/les=67/68 n=6 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=77 pruub=14.745985031s) [0] r=-1 lpr=77 pi=[67,77)/1 crt=62'764 mlcod 0'0 unknown NOTIFY pruub 222.436447144s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 77 pg[10.6( v 62'763 (0'0,62'763] local-lis/les=67/68 n=6 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=77 pruub=14.745744705s) [0] r=-1 lpr=77 pi=[67,77)/1 crt=62'763 mlcod 0'0 active pruub 222.436492920s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 77 pg[10.1e( v 62'763 (0'0,62'763] local-lis/les=67/68 n=5 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=77 pruub=14.745555878s) [0] r=-1 lpr=77 pi=[67,77)/1 crt=62'763 mlcod 0'0 active pruub 222.436477661s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 77 pg[10.1e( v 62'763 (0'0,62'763] local-lis/les=67/68 n=5 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=77 pruub=14.745537758s) [0] r=-1 lpr=77 pi=[67,77)/1 crt=62'763 mlcod 0'0 unknown NOTIFY pruub 222.436477661s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:09 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 77 pg[10.6( v 62'763 (0'0,62'763] local-lis/les=67/68 n=6 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=77 pruub=14.745675087s) [0] r=-1 lpr=77 pi=[67,77)/1 crt=62'763 mlcod 0'0 unknown NOTIFY pruub 222.436492920s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:09 compute-0 sudo[95502]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwbipeuwyfrdbrtrjtadlpaenquiccpp ; /usr/bin/python3'
Jan 23 09:53:09 compute-0 sudo[95502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:53:09 compute-0 python3[95504]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:53:09 compute-0 podman[95505]: 2026-01-23 09:53:09.957243217 +0000 UTC m=+0.041703130 container create 674c9006e7308b81c90589465cc52b46c1de87cb15ce0c260a6d21a3bbe20002 (image=quay.io/ceph/ceph:v19, name=practical_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 09:53:09 compute-0 systemd[1]: Started libpod-conmon-674c9006e7308b81c90589465cc52b46c1de87cb15ce0c260a6d21a3bbe20002.scope.
Jan 23 09:53:10 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83cb2fc2f3fbe28bbd98bc8e5a333f4a917773b76d9640886eb7f2b96133582/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83cb2fc2f3fbe28bbd98bc8e5a333f4a917773b76d9640886eb7f2b96133582/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:10 compute-0 podman[95505]: 2026-01-23 09:53:09.940498586 +0000 UTC m=+0.024958499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:53:10 compute-0 podman[95505]: 2026-01-23 09:53:10.036304221 +0000 UTC m=+0.120764144 container init 674c9006e7308b81c90589465cc52b46c1de87cb15ce0c260a6d21a3bbe20002 (image=quay.io/ceph/ceph:v19, name=practical_keller, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 09:53:10 compute-0 podman[95505]: 2026-01-23 09:53:10.044370812 +0000 UTC m=+0.128830725 container start 674c9006e7308b81c90589465cc52b46c1de87cb15ce0c260a6d21a3bbe20002 (image=quay.io/ceph/ceph:v19, name=practical_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:53:10 compute-0 podman[95505]: 2026-01-23 09:53:10.048035447 +0000 UTC m=+0.132495360 container attach 674c9006e7308b81c90589465cc52b46c1de87cb15ce0c260a6d21a3bbe20002 (image=quay.io/ceph/ceph:v19, name=practical_keller, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 23 09:53:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:10 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:10 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 1085f8ee-fa9c-4e41-a396-78f0c2857f3c (Global Recovery Event) in 5 seconds
Jan 23 09:53:10 compute-0 practical_keller[95520]: could not fetch user info: no user info saved
Jan 23 09:53:10 compute-0 systemd[1]: libpod-674c9006e7308b81c90589465cc52b46c1de87cb15ce0c260a6d21a3bbe20002.scope: Deactivated successfully.
Jan 23 09:53:10 compute-0 conmon[95520]: conmon 674c9006e7308b81c905 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-674c9006e7308b81c90589465cc52b46c1de87cb15ce0c260a6d21a3bbe20002.scope/container/memory.events
Jan 23 09:53:10 compute-0 podman[95505]: 2026-01-23 09:53:10.336141745 +0000 UTC m=+0.420601668 container died 674c9006e7308b81c90589465cc52b46c1de87cb15ce0c260a6d21a3bbe20002 (image=quay.io/ceph/ceph:v19, name=practical_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:53:10 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 23 09:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c83cb2fc2f3fbe28bbd98bc8e5a333f4a917773b76d9640886eb7f2b96133582-merged.mount: Deactivated successfully.
Jan 23 09:53:10 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 23 09:53:10 compute-0 podman[95505]: 2026-01-23 09:53:10.380713483 +0000 UTC m=+0.465173396 container remove 674c9006e7308b81c90589465cc52b46c1de87cb15ce0c260a6d21a3bbe20002 (image=quay.io/ceph/ceph:v19, name=practical_keller, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:53:10 compute-0 systemd[1]: libpod-conmon-674c9006e7308b81c90589465cc52b46c1de87cb15ce0c260a6d21a3bbe20002.scope: Deactivated successfully.
Jan 23 09:53:10 compute-0 sudo[95502]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:10 compute-0 ceph-mon[74335]: Deploying daemon haproxy.nfs.cephfs.compute-2.bbaqsj on compute-2
Jan 23 09:53:10 compute-0 ceph-mon[74335]: pgmap v93: 353 pgs: 353 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 556 B/s wr, 56 op/s; 84 B/s, 4 objects/s recovering
Jan 23 09:53:10 compute-0 ceph-mon[74335]: 11.9 scrub starts
Jan 23 09:53:10 compute-0 ceph-mon[74335]: 11.9 scrub ok
Jan 23 09:53:10 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 23 09:53:10 compute-0 ceph-mon[74335]: osdmap e77: 3 total, 3 up, 3 in
Jan 23 09:53:10 compute-0 ceph-mon[74335]: 12.1f scrub starts
Jan 23 09:53:10 compute-0 ceph-mon[74335]: 12.1f scrub ok
Jan 23 09:53:10 compute-0 sudo[95641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdqcuwjtrckzwyfmeucohydargwrcjgf ; /usr/bin/python3'
Jan 23 09:53:10 compute-0 sudo[95641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:53:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 23 09:53:10 compute-0 python3[95643]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:53:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.16( v 58'754 (0'0,58'754] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] r=0 lpr=78 pi=[67,78)/1 crt=58'754 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.16( v 58'754 (0'0,58'754] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] r=0 lpr=78 pi=[67,78)/1 crt=58'754 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.15( v 62'759 (0'0,62'759] local-lis/les=0/0 n=5 ec=59/46 lis/c=76/67 les/c/f=77/68/0 sis=78) [1] r=0 lpr=78 pi=[67,78)/1 luod=0'0 crt=62'759 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.15( v 62'759 (0'0,62'759] local-lis/les=0/0 n=5 ec=59/46 lis/c=76/67 les/c/f=77/68/0 sis=78) [1] r=0 lpr=78 pi=[67,78)/1 crt=62'759 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.d( v 62'768 (0'0,62'768] local-lis/les=0/0 n=6 ec=59/46 lis/c=76/67 les/c/f=77/68/0 sis=78) [1] r=0 lpr=78 pi=[67,78)/1 luod=0'0 crt=62'768 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.d( v 62'768 (0'0,62'768] local-lis/les=0/0 n=6 ec=59/46 lis/c=76/67 les/c/f=77/68/0 sis=78) [1] r=0 lpr=78 pi=[67,78)/1 crt=62'768 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:10 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.e( v 62'764 (0'0,62'764] local-lis/les=67/68 n=6 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] r=0 lpr=78 pi=[67,78)/1 crt=62'764 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.e( v 62'764 (0'0,62'764] local-lis/les=67/68 n=6 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] r=0 lpr=78 pi=[67,78)/1 crt=62'764 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.5( v 77'776 (0'0,77'776] local-lis/les=0/0 n=8 ec=59/46 lis/c=76/66 les/c/f=77/67/0 sis=78) [1] r=0 lpr=78 pi=[66,78)/1 luod=0'0 crt=68'773 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.5( v 77'776 (0'0,77'776] local-lis/les=0/0 n=8 ec=59/46 lis/c=76/66 les/c/f=77/67/0 sis=78) [1] r=0 lpr=78 pi=[66,78)/1 crt=68'773 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.6( v 62'763 (0'0,62'763] local-lis/les=67/68 n=6 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] r=0 lpr=78 pi=[67,78)/1 crt=62'763 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.6( v 62'763 (0'0,62'763] local-lis/les=67/68 n=6 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] r=0 lpr=78 pi=[67,78)/1 crt=62'763 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.1d( v 62'764 (0'0,62'764] local-lis/les=0/0 n=6 ec=59/46 lis/c=76/67 les/c/f=77/68/0 sis=78) [1] r=0 lpr=78 pi=[67,78)/1 luod=0'0 crt=62'764 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.1d( v 62'764 (0'0,62'764] local-lis/les=0/0 n=6 ec=59/46 lis/c=76/67 les/c/f=77/68/0 sis=78) [1] r=0 lpr=78 pi=[67,78)/1 crt=62'764 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.1e( v 62'763 (0'0,62'763] local-lis/les=67/68 n=5 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] r=0 lpr=78 pi=[67,78)/1 crt=62'763 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 78 pg[10.1e( v 62'763 (0'0,62'763] local-lis/les=67/68 n=5 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] r=0 lpr=78 pi=[67,78)/1 crt=62'763 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:10 compute-0 podman[95644]: 2026-01-23 09:53:10.767708563 +0000 UTC m=+0.048707179 container create 1c60e9a9574541f61bb95eec98d128fd367a6de690c5bb9bb85be60582e3cefb (image=quay.io/ceph/ceph:v19, name=hardcore_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 09:53:10 compute-0 systemd[1]: Started libpod-conmon-1c60e9a9574541f61bb95eec98d128fd367a6de690c5bb9bb85be60582e3cefb.scope.
Jan 23 09:53:10 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:53:10 compute-0 podman[95644]: 2026-01-23 09:53:10.748964129 +0000 UTC m=+0.029962755 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/971a25292daa13f1dea94477ad4091da98416059eaee845a30502db82bfff546/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/971a25292daa13f1dea94477ad4091da98416059eaee845a30502db82bfff546/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:10 compute-0 podman[95644]: 2026-01-23 09:53:10.860833215 +0000 UTC m=+0.141831851 container init 1c60e9a9574541f61bb95eec98d128fd367a6de690c5bb9bb85be60582e3cefb (image=quay.io/ceph/ceph:v19, name=hardcore_swirles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 09:53:10 compute-0 podman[95644]: 2026-01-23 09:53:10.868812624 +0000 UTC m=+0.149811240 container start 1c60e9a9574541f61bb95eec98d128fd367a6de690c5bb9bb85be60582e3cefb (image=quay.io/ceph/ceph:v19, name=hardcore_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:53:10 compute-0 podman[95644]: 2026-01-23 09:53:10.87350115 +0000 UTC m=+0.154499786 container attach 1c60e9a9574541f61bb95eec98d128fd367a6de690c5bb9bb85be60582e3cefb (image=quay.io/ceph/ceph:v19, name=hardcore_swirles, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 09:53:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 4 active+remapped, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s; 148 B/s, 6 objects/s recovering
Jan 23 09:53:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 23 09:53:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 23 09:53:11 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]: {
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "user_id": "openstack",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "display_name": "openstack",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "email": "",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "suspended": 0,
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "max_buckets": 1000,
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "subusers": [],
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "keys": [
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         {
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:             "user": "openstack",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:             "access_key": "C127N8IUOXEN4564U6K0",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:             "secret_key": "BW5ZyCQuKeYS9JuxNi3gCjml9P23HfWTaclTCxBw",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:             "active": true,
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:             "create_date": "2026-01-23T09:53:11.340812Z"
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         }
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     ],
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "swift_keys": [],
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "caps": [],
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "op_mask": "read, write, delete",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "default_placement": "",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "default_storage_class": "",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "placement_tags": [],
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "bucket_quota": {
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         "enabled": false,
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         "check_on_raw": false,
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         "max_size": -1,
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         "max_size_kb": 0,
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         "max_objects": -1
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     },
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "user_quota": {
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         "enabled": false,
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         "check_on_raw": false,
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         "max_size": -1,
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         "max_size_kb": 0,
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:         "max_objects": -1
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     },
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "temp_url_keys": [],
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "type": "rgw",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "mfa_ids": [],
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "account_id": "",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "path": "/",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "create_date": "2026-01-23T09:53:11.340024Z",
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "tags": [],
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]:     "group_ids": []
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]: }
Jan 23 09:53:11 compute-0 hardcore_swirles[95659]: 
Jan 23 09:53:11 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 23 09:53:11 compute-0 systemd[1]: libpod-1c60e9a9574541f61bb95eec98d128fd367a6de690c5bb9bb85be60582e3cefb.scope: Deactivated successfully.
Jan 23 09:53:11 compute-0 podman[95644]: 2026-01-23 09:53:11.424986335 +0000 UTC m=+0.705984951 container died 1c60e9a9574541f61bb95eec98d128fd367a6de690c5bb9bb85be60582e3cefb (image=quay.io/ceph/ceph:v19, name=hardcore_swirles, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 09:53:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:11 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-971a25292daa13f1dea94477ad4091da98416059eaee845a30502db82bfff546-merged.mount: Deactivated successfully.
Jan 23 09:53:11 compute-0 podman[95644]: 2026-01-23 09:53:11.478299706 +0000 UTC m=+0.759298322 container remove 1c60e9a9574541f61bb95eec98d128fd367a6de690c5bb9bb85be60582e3cefb (image=quay.io/ceph/ceph:v19, name=hardcore_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:53:11 compute-0 systemd[1]: libpod-conmon-1c60e9a9574541f61bb95eec98d128fd367a6de690c5bb9bb85be60582e3cefb.scope: Deactivated successfully.
Jan 23 09:53:11 compute-0 sudo[95641]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 23 09:53:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 23 09:53:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 23 09:53:11 compute-0 ceph-mon[74335]: 11.d scrub starts
Jan 23 09:53:11 compute-0 ceph-mon[74335]: 11.d scrub ok
Jan 23 09:53:11 compute-0 ceph-mon[74335]: 12.1b deep-scrub starts
Jan 23 09:53:11 compute-0 ceph-mon[74335]: 12.1b deep-scrub ok
Jan 23 09:53:11 compute-0 ceph-mon[74335]: osdmap e78: 3 total, 3 up, 3 in
Jan 23 09:53:11 compute-0 ceph-mon[74335]: pgmap v96: 353 pgs: 4 active+remapped, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s; 148 B/s, 6 objects/s recovering
Jan 23 09:53:11 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 23 09:53:11 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 23 09:53:11 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 79 pg[10.15( v 62'759 (0'0,62'759] local-lis/les=78/79 n=5 ec=59/46 lis/c=76/67 les/c/f=77/68/0 sis=78) [1] r=0 lpr=78 pi=[67,78)/1 crt=62'759 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:11 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 79 pg[10.d( v 62'768 (0'0,62'768] local-lis/les=78/79 n=6 ec=59/46 lis/c=76/67 les/c/f=77/68/0 sis=78) [1] r=0 lpr=78 pi=[67,78)/1 crt=62'768 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:11 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 79 pg[10.5( v 77'776 (0'0,77'776] local-lis/les=78/79 n=8 ec=59/46 lis/c=76/66 les/c/f=77/67/0 sis=78) [1] r=0 lpr=78 pi=[66,78)/1 crt=77'776 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:11 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 79 pg[10.1d( v 62'764 (0'0,62'764] local-lis/les=78/79 n=6 ec=59/46 lis/c=76/67 les/c/f=77/68/0 sis=78) [1] r=0 lpr=78 pi=[67,78)/1 crt=62'764 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:11 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 79 pg[10.1e( v 62'763 (0'0,62'763] local-lis/les=78/79 n=5 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[67,78)/1 crt=62'763 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:11 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 79 pg[10.16( v 58'754 (0'0,58'754] local-lis/les=78/79 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[67,78)/1 crt=58'754 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:11 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 79 pg[10.e( v 62'764 (0'0,62'764] local-lis/les=78/79 n=6 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[67,78)/1 crt=62'764 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:11 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 79 pg[10.6( v 62'763 (0'0,62'763] local-lis/les=78/79 n=6 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[67,78)/1 crt=62'763 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:12 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:12 compute-0 python3[95780]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:53:12 compute-0 ceph-mgr[74633]: [dashboard INFO request] [192.168.122.100:37754] [GET] [200] [0.138s] [6.3K] [f8916da2-c0a5-41b2-ba49-18707c502847] /
Jan 23 09:53:12 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 23 09:53:12 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 23 09:53:12 compute-0 python3[95804]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:53:12 compute-0 ceph-mgr[74633]: [dashboard INFO request] [192.168.122.100:37770] [GET] [200] [0.002s] [6.3K] [1837f9a4-1b21-4f8a-a5f2-0a994346f54e] /
Jan 23 09:53:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 23 09:53:12 compute-0 ceph-mon[74335]: 11.2 scrub starts
Jan 23 09:53:12 compute-0 ceph-mon[74335]: 11.2 scrub ok
Jan 23 09:53:12 compute-0 ceph-mon[74335]: 12.16 scrub starts
Jan 23 09:53:12 compute-0 ceph-mon[74335]: 12.16 scrub ok
Jan 23 09:53:12 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 23 09:53:12 compute-0 ceph-mon[74335]: osdmap e79: 3 total, 3 up, 3 in
Jan 23 09:53:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 23 09:53:12 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 23 09:53:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 80 pg[10.1e( v 62'763 (0'0,62'763] local-lis/les=78/79 n=5 ec=59/46 lis/c=78/67 les/c/f=79/68/0 sis=80 pruub=14.990098000s) [0] async=[0] r=-1 lpr=80 pi=[67,80)/1 crt=62'763 mlcod 62'763 active pruub 226.008041382s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 80 pg[10.6( v 62'763 (0'0,62'763] local-lis/les=78/79 n=6 ec=59/46 lis/c=78/67 les/c/f=79/68/0 sis=80 pruub=14.990281105s) [0] async=[0] r=-1 lpr=80 pi=[67,80)/1 crt=62'763 mlcod 62'763 active pruub 226.008285522s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 80 pg[10.16( v 58'754 (0'0,58'754] local-lis/les=78/79 n=4 ec=59/46 lis/c=78/67 les/c/f=79/68/0 sis=80 pruub=14.990010262s) [0] async=[0] r=-1 lpr=80 pi=[67,80)/1 crt=58'754 mlcod 58'754 active pruub 226.008056641s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 80 pg[10.16( v 58'754 (0'0,58'754] local-lis/les=78/79 n=4 ec=59/46 lis/c=78/67 les/c/f=79/68/0 sis=80 pruub=14.989958763s) [0] r=-1 lpr=80 pi=[67,80)/1 crt=58'754 mlcod 0'0 unknown NOTIFY pruub 226.008056641s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 80 pg[10.6( v 62'763 (0'0,62'763] local-lis/les=78/79 n=6 ec=59/46 lis/c=78/67 les/c/f=79/68/0 sis=80 pruub=14.990155220s) [0] r=-1 lpr=80 pi=[67,80)/1 crt=62'763 mlcod 0'0 unknown NOTIFY pruub 226.008285522s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 80 pg[10.e( v 62'764 (0'0,62'764] local-lis/les=78/79 n=6 ec=59/46 lis/c=78/67 les/c/f=79/68/0 sis=80 pruub=14.990141869s) [0] async=[0] r=-1 lpr=80 pi=[67,80)/1 crt=62'764 mlcod 62'764 active pruub 226.008071899s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 80 pg[10.1e( v 62'763 (0'0,62'763] local-lis/les=78/79 n=5 ec=59/46 lis/c=78/67 les/c/f=79/68/0 sis=80 pruub=14.989666939s) [0] r=-1 lpr=80 pi=[67,80)/1 crt=62'763 mlcod 0'0 unknown NOTIFY pruub 226.008041382s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:12 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 80 pg[10.e( v 62'764 (0'0,62'764] local-lis/les=78/79 n=6 ec=59/46 lis/c=78/67 les/c/f=79/68/0 sis=80 pruub=14.989680290s) [0] r=-1 lpr=80 pi=[67,80)/1 crt=62'764 mlcod 0'0 unknown NOTIFY pruub 226.008071899s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 4 active+remapped, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s; 148 B/s, 6 objects/s recovering
Jan 23 09:53:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 23 09:53:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 23 09:53:13 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 23 09:53:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:13 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:13 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 23 09:53:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:53:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:14 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 23 09:53:14 compute-0 ceph-mon[74335]: 11.6 scrub starts
Jan 23 09:53:14 compute-0 ceph-mon[74335]: 11.6 scrub ok
Jan 23 09:53:14 compute-0 ceph-mon[74335]: 10.1c scrub starts
Jan 23 09:53:14 compute-0 ceph-mon[74335]: 10.1c scrub ok
Jan 23 09:53:14 compute-0 ceph-mon[74335]: 12.14 scrub starts
Jan 23 09:53:14 compute-0 ceph-mon[74335]: 12.14 scrub ok
Jan 23 09:53:14 compute-0 ceph-mon[74335]: osdmap e80: 3 total, 3 up, 3 in
Jan 23 09:53:14 compute-0 ceph-mon[74335]: pgmap v99: 353 pgs: 4 active+remapped, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s; 148 B/s, 6 objects/s recovering
Jan 23 09:53:14 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 23 09:53:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 23 09:53:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 23 09:53:14 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 23 09:53:14 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 81 pg[10.18( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=81) [1] r=0 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:14 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 81 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=81) [1] r=0 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:53:14 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 23 09:53:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:14 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 23 09:53:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:53:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 23 09:53:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Jan 23 09:53:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:14 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 23 09:53:14 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 23 09:53:14 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:53:14 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:53:14 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:53:14 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:53:14 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.vcrquf on compute-1
Jan 23 09:53:14 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.vcrquf on compute-1
Jan 23 09:53:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 4 peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 3 op/s; 50 B/s, 4 objects/s recovering
Jan 23 09:53:15 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 23 completed events
Jan 23 09:53:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:53:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:15 compute-0 ceph-mgr[74633]: [progress WARNING root] Starting Global Recovery Event,4 pgs not in active + clean state
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 11.18 scrub starts
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 11.18 scrub ok
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 10.1b scrub starts
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 10.1b scrub ok
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 12.1 deep-scrub starts
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 12.1 deep-scrub ok
Jan 23 09:53:15 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 23 09:53:15 compute-0 ceph-mon[74335]: osdmap e81: 3 total, 3 up, 3 in
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 11.1f scrub starts
Jan 23 09:53:15 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 11.1f scrub ok
Jan 23 09:53:15 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 10.19 scrub starts
Jan 23 09:53:15 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 10.19 scrub ok
Jan 23 09:53:15 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 11.1a scrub starts
Jan 23 09:53:15 compute-0 ceph-mon[74335]: 11.1a scrub ok
Jan 23 09:53:15 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 23 09:53:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 23 09:53:15 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 23 09:53:15 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 82 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=82) [1]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:15 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 82 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=82) [1]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:15 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 82 pg[10.18( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=82) [1]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:15 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 82 pg[10.18( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=82) [1]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:15 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:15 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 23 09:53:15 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 23 09:53:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:15 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:16 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 23 09:53:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 23 09:53:16 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 23 09:53:16 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 23 09:53:16 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:53:16 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:53:16 compute-0 ceph-mon[74335]: Deploying daemon keepalived.nfs.cephfs.compute-1.vcrquf on compute-1
Jan 23 09:53:16 compute-0 ceph-mon[74335]: pgmap v101: 353 pgs: 4 peering, 349 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 3 op/s; 50 B/s, 4 objects/s recovering
Jan 23 09:53:16 compute-0 ceph-mon[74335]: osdmap e82: 3 total, 3 up, 3 in
Jan 23 09:53:16 compute-0 ceph-mon[74335]: 11.10 scrub starts
Jan 23 09:53:16 compute-0 ceph-mon[74335]: 11.10 scrub ok
Jan 23 09:53:16 compute-0 ceph-mon[74335]: 8.d scrub starts
Jan 23 09:53:16 compute-0 ceph-mon[74335]: 8.d scrub ok
Jan 23 09:53:16 compute-0 ceph-mon[74335]: 11.1e scrub starts
Jan 23 09:53:16 compute-0 ceph-mon[74335]: 11.1e scrub ok
Jan 23 09:53:16 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 23 09:53:16 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 23 09:53:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 09:53:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 23 09:53:17 compute-0 ceph-mon[74335]: osdmap e83: 3 total, 3 up, 3 in
Jan 23 09:53:17 compute-0 ceph-mon[74335]: 8.a scrub starts
Jan 23 09:53:17 compute-0 ceph-mon[74335]: 8.a scrub ok
Jan 23 09:53:17 compute-0 ceph-mon[74335]: 11.11 scrub starts
Jan 23 09:53:17 compute-0 ceph-mon[74335]: 11.11 scrub ok
Jan 23 09:53:17 compute-0 ceph-mon[74335]: 11.1c scrub starts
Jan 23 09:53:17 compute-0 ceph-mon[74335]: 11.1c scrub ok
Jan 23 09:53:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 23 09:53:17 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 23 09:53:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 84 pg[10.18( v 58'754 (0'0,58'754] local-lis/les=0/0 n=4 ec=59/46 lis/c=82/59 les/c/f=83/60/0 sis=84) [1] r=0 lpr=84 pi=[59,84)/1 luod=0'0 crt=58'754 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 84 pg[10.18( v 58'754 (0'0,58'754] local-lis/les=0/0 n=4 ec=59/46 lis/c=82/59 les/c/f=83/60/0 sis=84) [1] r=0 lpr=84 pi=[59,84)/1 crt=58'754 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 84 pg[10.8( v 62'761 (0'0,62'761] local-lis/les=0/0 n=6 ec=59/46 lis/c=82/59 les/c/f=83/60/0 sis=84) [1] r=0 lpr=84 pi=[59,84)/1 luod=0'0 crt=62'761 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:17 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 84 pg[10.8( v 62'761 (0'0,62'761] local-lis/les=0/0 n=6 ec=59/46 lis/c=82/59 les/c/f=83/60/0 sis=84) [1] r=0 lpr=84 pi=[59,84)/1 crt=62'761 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:17 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:17 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Jan 23 09:53:17 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Jan 23 09:53:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:17 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:18 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc002bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 23 09:53:18 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Jan 23 09:53:18 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Jan 23 09:53:18 compute-0 ceph-mon[74335]: pgmap v104: 353 pgs: 4 peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 09:53:18 compute-0 ceph-mon[74335]: osdmap e84: 3 total, 3 up, 3 in
Jan 23 09:53:18 compute-0 ceph-mon[74335]: 9.7 scrub starts
Jan 23 09:53:18 compute-0 ceph-mon[74335]: 9.7 scrub ok
Jan 23 09:53:18 compute-0 ceph-mon[74335]: 12.10 scrub starts
Jan 23 09:53:18 compute-0 ceph-mon[74335]: 11.1b scrub starts
Jan 23 09:53:18 compute-0 ceph-mon[74335]: 12.10 scrub ok
Jan 23 09:53:18 compute-0 ceph-mon[74335]: 11.1b scrub ok
Jan 23 09:53:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 23 09:53:18 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 23 09:53:18 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 85 pg[10.8( v 62'761 (0'0,62'761] local-lis/les=84/85 n=6 ec=59/46 lis/c=82/59 les/c/f=83/60/0 sis=84) [1] r=0 lpr=84 pi=[59,84)/1 crt=62'761 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:18 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 85 pg[10.18( v 58'754 (0'0,58'754] local-lis/les=84/85 n=4 ec=59/46 lis/c=82/59 les/c/f=83/60/0 sis=84) [1] r=0 lpr=84 pi=[59,84)/1 crt=58'754 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 3 objects/s recovering
Jan 23 09:53:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 23 09:53:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 23 09:53:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:53:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:53:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 23 09:53:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:19 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:53:19 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:53:19 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 23 09:53:19 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 23 09:53:19 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:53:19 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:53:19 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.lrsdkc on compute-0
Jan 23 09:53:19 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.lrsdkc on compute-0
Jan 23 09:53:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:19 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:19 compute-0 sudo[95806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:53:19 compute-0 sudo[95806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:53:19 compute-0 sudo[95806]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:19 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.c deep-scrub starts
Jan 23 09:53:19 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.c deep-scrub ok
Jan 23 09:53:19 compute-0 sudo[95831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:53:19 compute-0 sudo[95831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:53:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:19 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 23 09:53:19 compute-0 ceph-mon[74335]: 11.a scrub starts
Jan 23 09:53:19 compute-0 ceph-mon[74335]: 11.a scrub ok
Jan 23 09:53:19 compute-0 ceph-mon[74335]: 11.1d deep-scrub starts
Jan 23 09:53:19 compute-0 ceph-mon[74335]: 12.6 scrub starts
Jan 23 09:53:19 compute-0 ceph-mon[74335]: 11.1d deep-scrub ok
Jan 23 09:53:19 compute-0 ceph-mon[74335]: 12.6 scrub ok
Jan 23 09:53:19 compute-0 ceph-mon[74335]: osdmap e85: 3 total, 3 up, 3 in
Jan 23 09:53:19 compute-0 ceph-mon[74335]: pgmap v107: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 3 objects/s recovering
Jan 23 09:53:19 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 23 09:53:19 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:19 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:19 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:19 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:53:19 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 23 09:53:19 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:53:19 compute-0 ceph-mon[74335]: Deploying daemon keepalived.nfs.cephfs.compute-0.lrsdkc on compute-0
Jan 23 09:53:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 23 09:53:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 23 09:53:19 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 23 09:53:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:20 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:20 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event b03cf77f-3380-4939-b7be-1e4403e9e1c9 (Global Recovery Event) in 5 seconds
Jan 23 09:53:20 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Jan 23 09:53:20 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Jan 23 09:53:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 6 objects/s recovering
Jan 23 09:53:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 23 09:53:21 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 23 09:53:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 23 09:53:21 compute-0 ceph-mon[74335]: 11.4 scrub starts
Jan 23 09:53:21 compute-0 ceph-mon[74335]: 11.4 scrub ok
Jan 23 09:53:21 compute-0 ceph-mon[74335]: 11.16 scrub starts
Jan 23 09:53:21 compute-0 ceph-mon[74335]: 11.16 scrub ok
Jan 23 09:53:21 compute-0 ceph-mon[74335]: 12.c deep-scrub starts
Jan 23 09:53:21 compute-0 ceph-mon[74335]: 12.c deep-scrub ok
Jan 23 09:53:21 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 23 09:53:21 compute-0 ceph-mon[74335]: osdmap e86: 3 total, 3 up, 3 in
Jan 23 09:53:21 compute-0 ceph-mon[74335]: 11.7 scrub starts
Jan 23 09:53:21 compute-0 ceph-mon[74335]: 11.7 scrub ok
Jan 23 09:53:21 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 23 09:53:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 23 09:53:21 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 23 09:53:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:21 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc002bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:21 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 87 pg[10.a( v 62'773 (0'0,62'773] local-lis/les=67/68 n=7 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=87 pruub=10.852958679s) [0] r=-1 lpr=87 pi=[67,87)/1 crt=62'773 mlcod 0'0 active pruub 230.436737061s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:21 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 87 pg[10.a( v 62'773 (0'0,62'773] local-lis/les=67/68 n=7 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=87 pruub=10.852859497s) [0] r=-1 lpr=87 pi=[67,87)/1 crt=62'773 mlcod 0'0 unknown NOTIFY pruub 230.436737061s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:21 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 87 pg[10.1a( v 61'756 (0'0,61'756] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=87 pruub=10.851963043s) [0] r=-1 lpr=87 pi=[67,87)/1 crt=61'756 mlcod 0'0 active pruub 230.436752319s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:21 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 87 pg[10.1a( v 61'756 (0'0,61'756] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=87 pruub=10.851899147s) [0] r=-1 lpr=87 pi=[67,87)/1 crt=61'756 mlcod 0'0 unknown NOTIFY pruub 230.436752319s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:21 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.b scrub starts
Jan 23 09:53:21 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.b scrub ok
Jan 23 09:53:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:21 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:22 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:22 compute-0 ceph-mon[74335]: 12.12 scrub starts
Jan 23 09:53:22 compute-0 ceph-mon[74335]: 11.8 scrub starts
Jan 23 09:53:22 compute-0 ceph-mon[74335]: 12.12 scrub ok
Jan 23 09:53:22 compute-0 ceph-mon[74335]: 11.8 scrub ok
Jan 23 09:53:22 compute-0 ceph-mon[74335]: pgmap v109: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 6 objects/s recovering
Jan 23 09:53:22 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 23 09:53:22 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 23 09:53:22 compute-0 ceph-mon[74335]: osdmap e87: 3 total, 3 up, 3 in
Jan 23 09:53:22 compute-0 ceph-mon[74335]: 11.f deep-scrub starts
Jan 23 09:53:22 compute-0 ceph-mon[74335]: 11.f deep-scrub ok
Jan 23 09:53:22 compute-0 ceph-mon[74335]: 12.b scrub starts
Jan 23 09:53:22 compute-0 ceph-mon[74335]: 12.b scrub ok
Jan 23 09:53:22 compute-0 ceph-mon[74335]: 11.13 scrub starts
Jan 23 09:53:22 compute-0 ceph-mon[74335]: 11.13 scrub ok
Jan 23 09:53:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 23 09:53:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 23 09:53:22 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 23 09:53:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 88 pg[10.a( v 62'773 (0'0,62'773] local-lis/les=67/68 n=7 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[1] r=0 lpr=88 pi=[67,88)/1 crt=62'773 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 88 pg[10.a( v 62'773 (0'0,62'773] local-lis/les=67/68 n=7 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[1] r=0 lpr=88 pi=[67,88)/1 crt=62'773 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 88 pg[10.1a( v 61'756 (0'0,61'756] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[1] r=0 lpr=88 pi=[67,88)/1 crt=61'756 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:22 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 88 pg[10.1a( v 61'756 (0'0,61'756] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[1] r=0 lpr=88 pi=[67,88)/1 crt=61'756 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:22 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.e scrub starts
Jan 23 09:53:22 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.e scrub ok
Jan 23 09:53:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 7 objects/s recovering
Jan 23 09:53:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 23 09:53:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 23 09:53:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 23 09:53:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 23 09:53:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 23 09:53:23 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 23 09:53:23 compute-0 ceph-mon[74335]: osdmap e88: 3 total, 3 up, 3 in
Jan 23 09:53:23 compute-0 ceph-mon[74335]: 11.17 scrub starts
Jan 23 09:53:23 compute-0 ceph-mon[74335]: 11.17 scrub ok
Jan 23 09:53:23 compute-0 ceph-mon[74335]: 12.e scrub starts
Jan 23 09:53:23 compute-0 ceph-mon[74335]: 12.e scrub ok
Jan 23 09:53:23 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 23 09:53:23 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 89 pg[10.1a( v 61'756 (0'0,61'756] local-lis/les=88/89 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=61'756 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:23 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 89 pg[10.a( v 62'773 (0'0,62'773] local-lis/les=88/89 n=7 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=62'773 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:23 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:53:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 23 09:53:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 23 09:53:23 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 23 09:53:23 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 90 pg[10.a( v 62'773 (0'0,62'773] local-lis/les=88/89 n=7 ec=59/46 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=15.838144302s) [0] async=[0] r=-1 lpr=90 pi=[67,90)/1 crt=62'773 mlcod 62'773 active pruub 237.510406494s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:23 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 90 pg[10.a( v 62'773 (0'0,62'773] local-lis/les=88/89 n=7 ec=59/46 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=15.837927818s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=62'773 mlcod 0'0 unknown NOTIFY pruub 237.510406494s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:23 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 90 pg[10.1a( v 61'756 (0'0,61'756] local-lis/les=88/89 n=4 ec=59/46 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=15.837389946s) [0] async=[0] r=-1 lpr=90 pi=[67,90)/1 crt=61'756 mlcod 61'756 active pruub 237.510391235s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:23 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 90 pg[10.1a( v 61'756 (0'0,61'756] local-lis/les=88/89 n=4 ec=59/46 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=15.837327003s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=61'756 mlcod 0'0 unknown NOTIFY pruub 237.510391235s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:23 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:24 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:24 compute-0 podman[95895]: 2026-01-23 09:53:24.486300824 +0000 UTC m=+4.519677851 container create 7aeee33d10dba6e3ae07c1869666410a63cf09bfb3a4e9187dc29f71a6d58102 (image=quay.io/ceph/keepalived:2.2.4, name=keen_solomon, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, release=1793, description=keepalived for Ceph, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., version=2.2.4)
Jan 23 09:53:24 compute-0 podman[95895]: 2026-01-23 09:53:24.469703367 +0000 UTC m=+4.503080424 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 23 09:53:24 compute-0 systemd[1]: Started libpod-conmon-7aeee33d10dba6e3ae07c1869666410a63cf09bfb3a4e9187dc29f71a6d58102.scope.
Jan 23 09:53:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 23 09:53:24 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:53:24 compute-0 podman[95895]: 2026-01-23 09:53:24.569847027 +0000 UTC m=+4.603224074 container init 7aeee33d10dba6e3ae07c1869666410a63cf09bfb3a4e9187dc29f71a6d58102 (image=quay.io/ceph/keepalived:2.2.4, name=keen_solomon, build-date=2023-02-22T09:23:20, name=keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, version=2.2.4, description=keepalived for Ceph, vcs-type=git)
Jan 23 09:53:24 compute-0 podman[95895]: 2026-01-23 09:53:24.576927268 +0000 UTC m=+4.610304305 container start 7aeee33d10dba6e3ae07c1869666410a63cf09bfb3a4e9187dc29f71a6d58102 (image=quay.io/ceph/keepalived:2.2.4, name=keen_solomon, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, release=1793, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 09:53:24 compute-0 keen_solomon[95990]: 0 0
Jan 23 09:53:24 compute-0 podman[95895]: 2026-01-23 09:53:24.581894563 +0000 UTC m=+4.615271630 container attach 7aeee33d10dba6e3ae07c1869666410a63cf09bfb3a4e9187dc29f71a6d58102 (image=quay.io/ceph/keepalived:2.2.4, name=keen_solomon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, release=1793, architecture=x86_64, vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-type=git, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Jan 23 09:53:24 compute-0 systemd[1]: libpod-7aeee33d10dba6e3ae07c1869666410a63cf09bfb3a4e9187dc29f71a6d58102.scope: Deactivated successfully.
Jan 23 09:53:24 compute-0 conmon[95990]: conmon 7aeee33d10dba6e3ae07 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7aeee33d10dba6e3ae07c1869666410a63cf09bfb3a4e9187dc29f71a6d58102.scope/container/memory.events
Jan 23 09:53:24 compute-0 podman[95895]: 2026-01-23 09:53:24.583691529 +0000 UTC m=+4.617068566 container died 7aeee33d10dba6e3ae07c1869666410a63cf09bfb3a4e9187dc29f71a6d58102 (image=quay.io/ceph/keepalived:2.2.4, name=keen_solomon, distribution-scope=public, release=1793, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, name=keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph)
Jan 23 09:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c8653db0f603c74a59885cf3170210b21b9c60b542069d7fbddcf57e329e696-merged.mount: Deactivated successfully.
Jan 23 09:53:24 compute-0 podman[95895]: 2026-01-23 09:53:24.622635262 +0000 UTC m=+4.656012289 container remove 7aeee33d10dba6e3ae07c1869666410a63cf09bfb3a4e9187dc29f71a6d58102 (image=quay.io/ceph/keepalived:2.2.4, name=keen_solomon, com.redhat.component=keepalived-container, io.openshift.expose-services=, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, vcs-type=git, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, release=1793)
Jan 23 09:53:24 compute-0 systemd[1]: libpod-conmon-7aeee33d10dba6e3ae07c1869666410a63cf09bfb3a4e9187dc29f71a6d58102.scope: Deactivated successfully.
Jan 23 09:53:24 compute-0 systemd[1]: Reloading.
Jan 23 09:53:24 compute-0 systemd-rc-local-generator[96036]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:53:24 compute-0 systemd-sysv-generator[96039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:53:25 compute-0 systemd[1]: Reloading.
Jan 23 09:53:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 4 objects/s recovering
Jan 23 09:53:25 compute-0 systemd-rc-local-generator[96076]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:53:25 compute-0 systemd-sysv-generator[96079]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:53:25 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 24 completed events
Jan 23 09:53:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:53:25 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.lrsdkc for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:53:25 compute-0 ceph-mon[74335]: pgmap v112: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 7 objects/s recovering
Jan 23 09:53:25 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 23 09:53:25 compute-0 ceph-mon[74335]: osdmap e89: 3 total, 3 up, 3 in
Jan 23 09:53:25 compute-0 ceph-mon[74335]: 10.16 scrub starts
Jan 23 09:53:25 compute-0 ceph-mon[74335]: 10.16 scrub ok
Jan 23 09:53:25 compute-0 ceph-mon[74335]: 12.1d scrub starts
Jan 23 09:53:25 compute-0 ceph-mon[74335]: 12.1d scrub ok
Jan 23 09:53:25 compute-0 ceph-mon[74335]: osdmap e90: 3 total, 3 up, 3 in
Jan 23 09:53:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 23 09:53:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:25 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 23 09:53:25 compute-0 ceph-mgr[74633]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 23 09:53:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:25 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:25 compute-0 podman[96133]: 2026-01-23 09:53:25.522807193 +0000 UTC m=+0.040465602 container create 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4)
Jan 23 09:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f796cc95f534e6a84bd0a7e0228828397ca5e9569b95c8d09d6527edf64deca/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:25 compute-0 podman[96133]: 2026-01-23 09:53:25.57533239 +0000 UTC m=+0.092990829 container init 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, vendor=Red Hat, Inc., version=2.2.4, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 09:53:25 compute-0 podman[96133]: 2026-01-23 09:53:25.581322257 +0000 UTC m=+0.098980666 container start 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, distribution-scope=public, vcs-type=git, com.redhat.component=keepalived-container, name=keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=)
Jan 23 09:53:25 compute-0 bash[96133]: 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609
Jan 23 09:53:25 compute-0 podman[96133]: 2026-01-23 09:53:25.504526874 +0000 UTC m=+0.022185313 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 23 09:53:25 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.lrsdkc for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:53:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:53:25 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 23 09:53:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:53:25 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 23 09:53:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:53:25 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 23 09:53:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:53:25 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 23 09:53:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:53:25 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 23 09:53:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:53:25 2026: Starting VRRP child process, pid=4
Jan 23 09:53:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:53:25 2026: Startup complete
Jan 23 09:53:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:53:25 2026: (VI_0) Entering BACKUP STATE (init)
Jan 23 09:53:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:53:25 2026: VRRP_Script(check_backend) succeeded
Jan 23 09:53:25 compute-0 sudo[95831]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:53:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:53:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 23 09:53:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:25 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:53:25 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:53:25 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:53:25 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:53:25 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 23 09:53:25 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 23 09:53:25 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.pawaai on compute-2
Jan 23 09:53:25 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.pawaai on compute-2
Jan 23 09:53:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:25 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:26 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 23 09:53:26 compute-0 ceph-mon[74335]: 10.0 scrub starts
Jan 23 09:53:26 compute-0 ceph-mon[74335]: 10.0 scrub ok
Jan 23 09:53:26 compute-0 ceph-mon[74335]: 12.1e scrub starts
Jan 23 09:53:26 compute-0 ceph-mon[74335]: 12.1e scrub ok
Jan 23 09:53:26 compute-0 ceph-mon[74335]: pgmap v115: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 4 objects/s recovering
Jan 23 09:53:26 compute-0 ceph-mon[74335]: 10.f scrub starts
Jan 23 09:53:26 compute-0 ceph-mon[74335]: 10.f scrub ok
Jan 23 09:53:26 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:26 compute-0 ceph-mon[74335]: osdmap e91: 3 total, 3 up, 3 in
Jan 23 09:53:26 compute-0 ceph-mon[74335]: 12.2 scrub starts
Jan 23 09:53:26 compute-0 ceph-mon[74335]: 12.2 scrub ok
Jan 23 09:53:26 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:26 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:26 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 23 09:53:27 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 23 09:53:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 1 active+recovering+remapped, 1 active+remapped, 2 peering, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 2/221 objects misplaced (0.905%); 137 B/s, 5 objects/s recovering
Jan 23 09:53:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:27 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:27 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 23 09:53:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:28 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Jan 23 09:53:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:53:29 2026: (VI_0) Entering MASTER STATE
Jan 23 09:53:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:29 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:29 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.a scrub starts
Jan 23 09:53:29 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.a scrub ok
Jan 23 09:53:29 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:53:29 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:53:29 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 23 09:53:29 compute-0 ceph-mon[74335]: Deploying daemon keepalived.nfs.cephfs.compute-2.pawaai on compute-2
Jan 23 09:53:29 compute-0 ceph-mon[74335]: 10.e scrub starts
Jan 23 09:53:29 compute-0 ceph-mon[74335]: 10.e scrub ok
Jan 23 09:53:29 compute-0 ceph-mon[74335]: 12.3 scrub starts
Jan 23 09:53:29 compute-0 ceph-mon[74335]: 12.3 scrub ok
Jan 23 09:53:29 compute-0 ceph-mon[74335]: osdmap e92: 3 total, 3 up, 3 in
Jan 23 09:53:29 compute-0 ceph-mon[74335]: pgmap v118: 353 pgs: 1 active+recovering+remapped, 1 active+remapped, 2 peering, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 2/221 objects misplaced (0.905%); 137 B/s, 5 objects/s recovering
Jan 23 09:53:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:29 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 23 09:53:29 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 23 09:53:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:30 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:30 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.8 deep-scrub starts
Jan 23 09:53:30 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.8 deep-scrub ok
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 10.6 deep-scrub starts
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 10.6 deep-scrub ok
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 12.1a scrub starts
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 12.1a scrub ok
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 11.1 scrub starts
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 11.1 scrub ok
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 12.9 scrub starts
Jan 23 09:53:30 compute-0 ceph-mon[74335]: pgmap v119: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 11.12 deep-scrub starts
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 11.12 deep-scrub ok
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 12.9 scrub ok
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 12.4 scrub starts
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 12.4 scrub ok
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 12.a scrub starts
Jan 23 09:53:30 compute-0 ceph-mon[74335]: 12.a scrub ok
Jan 23 09:53:30 compute-0 ceph-mon[74335]: osdmap e93: 3 total, 3 up, 3 in
Jan 23 09:53:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:31 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:31 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Jan 23 09:53:31 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Jan 23 09:53:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:31 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:31 compute-0 ceph-mon[74335]: 11.5 scrub starts
Jan 23 09:53:31 compute-0 ceph-mon[74335]: 11.5 scrub ok
Jan 23 09:53:31 compute-0 ceph-mon[74335]: 12.7 scrub starts
Jan 23 09:53:31 compute-0 ceph-mon[74335]: 12.7 scrub ok
Jan 23 09:53:31 compute-0 ceph-mon[74335]: 12.8 deep-scrub starts
Jan 23 09:53:31 compute-0 ceph-mon[74335]: 12.8 deep-scrub ok
Jan 23 09:53:31 compute-0 ceph-mon[74335]: pgmap v121: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:31 compute-0 ceph-mon[74335]: 11.14 deep-scrub starts
Jan 23 09:53:31 compute-0 ceph-mon[74335]: 11.14 deep-scrub ok
Jan 23 09:53:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:32 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:53:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:53:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 23 09:53:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:32 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev d358dc74-4710-4dba-83e4-bef606d6850f (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Jan 23 09:53:32 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event d358dc74-4710-4dba-83e4-bef606d6850f (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 36 seconds
Jan 23 09:53:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 23 09:53:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:32 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 2d232966-1747-4439-9384-593746669617 (Updating alertmanager deployment (+1 -> 1))
Jan 23 09:53:32 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Jan 23 09:53:32 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Jan 23 09:53:32 compute-0 sudo[96159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:53:32 compute-0 sudo[96159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:53:32 compute-0 sudo[96159]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:32 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Jan 23 09:53:32 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Jan 23 09:53:32 compute-0 sudo[96184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:53:32 compute-0 sudo[96184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:53:32 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:53:32
Jan 23 09:53:32 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:53:32 compute-0 ceph-mgr[74633]: [balancer INFO root] Some PGs (0.005666) are inactive; try again later
Jan 23 09:53:32 compute-0 ceph-mon[74335]: 11.19 deep-scrub starts
Jan 23 09:53:32 compute-0 ceph-mon[74335]: 11.19 deep-scrub ok
Jan 23 09:53:32 compute-0 ceph-mon[74335]: 12.1c scrub starts
Jan 23 09:53:32 compute-0 ceph-mon[74335]: 12.1c scrub ok
Jan 23 09:53:32 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:32 compute-0 ceph-mon[74335]: 10.1e scrub starts
Jan 23 09:53:32 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:32 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:32 compute-0 ceph-mon[74335]: 10.1e scrub ok
Jan 23 09:53:32 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:32 compute-0 ceph-mon[74335]: Deploying daemon alertmanager.compute-0 on compute-0
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:53:33 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:53:33 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 23 09:53:33 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 23 09:53:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:33 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:53:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:33 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:34 compute-0 ceph-mon[74335]: 12.17 scrub starts
Jan 23 09:53:34 compute-0 ceph-mon[74335]: 12.17 scrub ok
Jan 23 09:53:34 compute-0 ceph-mon[74335]: 12.19 scrub starts
Jan 23 09:53:34 compute-0 ceph-mon[74335]: 12.19 scrub ok
Jan 23 09:53:34 compute-0 ceph-mon[74335]: pgmap v122: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:34 compute-0 ceph-mon[74335]: 10.1f scrub starts
Jan 23 09:53:34 compute-0 ceph-mon[74335]: 10.1f scrub ok
Jan 23 09:53:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:34 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:34 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 23 09:53:34 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 23 09:53:34 compute-0 sshd-session[96355]: Accepted publickey for zuul from 192.168.122.30 port 44424 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:53:34 compute-0 systemd-logind[784]: New session 37 of user zuul.
Jan 23 09:53:34 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 23 09:53:34 compute-0 sshd-session[96355]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:53:34 compute-0 podman[96247]: 2026-01-23 09:53:34.692181863 +0000 UTC m=+1.879245910 volume create 297d351d00c87f5cab2d8bc07f299af1ec5033ff6e558a4bc47e75296e5f66cd
Jan 23 09:53:34 compute-0 podman[96247]: 2026-01-23 09:53:34.699885494 +0000 UTC m=+1.886949541 container create 998306a8db7bc2d562b9eb255d93211b7a3a3af9b67af9dc75fe340117f69ac2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=clever_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:34 compute-0 systemd[90024]: Starting Mark boot as successful...
Jan 23 09:53:34 compute-0 podman[96247]: 2026-01-23 09:53:34.676266948 +0000 UTC m=+1.863331015 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 23 09:53:34 compute-0 systemd[90024]: Finished Mark boot as successful.
Jan 23 09:53:34 compute-0 systemd[1]: Started libpod-conmon-998306a8db7bc2d562b9eb255d93211b7a3a3af9b67af9dc75fe340117f69ac2.scope.
Jan 23 09:53:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:53:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8df28accfbf048fe2b2baac2bc822836c41ea3b3471adbc421a4ccb750fe0d5/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:34 compute-0 podman[96247]: 2026-01-23 09:53:34.787271697 +0000 UTC m=+1.974335764 container init 998306a8db7bc2d562b9eb255d93211b7a3a3af9b67af9dc75fe340117f69ac2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=clever_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:34 compute-0 podman[96247]: 2026-01-23 09:53:34.794480461 +0000 UTC m=+1.981544508 container start 998306a8db7bc2d562b9eb255d93211b7a3a3af9b67af9dc75fe340117f69ac2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=clever_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:34 compute-0 clever_benz[96395]: 65534 65534
Jan 23 09:53:34 compute-0 systemd[1]: libpod-998306a8db7bc2d562b9eb255d93211b7a3a3af9b67af9dc75fe340117f69ac2.scope: Deactivated successfully.
Jan 23 09:53:34 compute-0 podman[96247]: 2026-01-23 09:53:34.799614761 +0000 UTC m=+1.986678828 container attach 998306a8db7bc2d562b9eb255d93211b7a3a3af9b67af9dc75fe340117f69ac2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=clever_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:34 compute-0 podman[96247]: 2026-01-23 09:53:34.800004253 +0000 UTC m=+1.987068300 container died 998306a8db7bc2d562b9eb255d93211b7a3a3af9b67af9dc75fe340117f69ac2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=clever_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8df28accfbf048fe2b2baac2bc822836c41ea3b3471adbc421a4ccb750fe0d5-merged.mount: Deactivated successfully.
Jan 23 09:53:34 compute-0 podman[96247]: 2026-01-23 09:53:34.839600747 +0000 UTC m=+2.026664794 container remove 998306a8db7bc2d562b9eb255d93211b7a3a3af9b67af9dc75fe340117f69ac2 (image=quay.io/prometheus/alertmanager:v0.25.0, name=clever_benz, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:34 compute-0 podman[96247]: 2026-01-23 09:53:34.844909583 +0000 UTC m=+2.031973630 volume remove 297d351d00c87f5cab2d8bc07f299af1ec5033ff6e558a4bc47e75296e5f66cd
Jan 23 09:53:34 compute-0 systemd[1]: libpod-conmon-998306a8db7bc2d562b9eb255d93211b7a3a3af9b67af9dc75fe340117f69ac2.scope: Deactivated successfully.
Jan 23 09:53:34 compute-0 podman[96454]: 2026-01-23 09:53:34.91094259 +0000 UTC m=+0.040484962 volume create 422eec9978968063730d117447aaba2b8a35c25753b4377faa334e030c8dbdc1
Jan 23 09:53:34 compute-0 podman[96454]: 2026-01-23 09:53:34.920767667 +0000 UTC m=+0.050310029 container create a59f9aa53a72206957da5b829aaf9f93131c0a4246f1d25b0d3bd3ff20b5ced7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:34 compute-0 systemd[1]: Started libpod-conmon-a59f9aa53a72206957da5b829aaf9f93131c0a4246f1d25b0d3bd3ff20b5ced7.scope.
Jan 23 09:53:34 compute-0 podman[96454]: 2026-01-23 09:53:34.895332134 +0000 UTC m=+0.024874556 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 23 09:53:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f0c2cbdc8ee65de0a8f6d9b9122c771f909aa096dd7583981671f7cb44a5777/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:35 compute-0 podman[96454]: 2026-01-23 09:53:35.030085213 +0000 UTC m=+0.159627605 container init a59f9aa53a72206957da5b829aaf9f93131c0a4246f1d25b0d3bd3ff20b5ced7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:35 compute-0 ceph-mon[74335]: 12.11 scrub starts
Jan 23 09:53:35 compute-0 ceph-mon[74335]: 12.11 scrub ok
Jan 23 09:53:35 compute-0 ceph-mon[74335]: 10.15 scrub starts
Jan 23 09:53:35 compute-0 ceph-mon[74335]: 10.15 scrub ok
Jan 23 09:53:35 compute-0 ceph-mon[74335]: 10.10 scrub starts
Jan 23 09:53:35 compute-0 ceph-mon[74335]: 10.10 scrub ok
Jan 23 09:53:35 compute-0 podman[96454]: 2026-01-23 09:53:35.0357756 +0000 UTC m=+0.165317962 container start a59f9aa53a72206957da5b829aaf9f93131c0a4246f1d25b0d3bd3ff20b5ced7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:35 compute-0 blissful_borg[96470]: 65534 65534
Jan 23 09:53:35 compute-0 systemd[1]: libpod-a59f9aa53a72206957da5b829aaf9f93131c0a4246f1d25b0d3bd3ff20b5ced7.scope: Deactivated successfully.
Jan 23 09:53:35 compute-0 podman[96454]: 2026-01-23 09:53:35.041627803 +0000 UTC m=+0.171170195 container attach a59f9aa53a72206957da5b829aaf9f93131c0a4246f1d25b0d3bd3ff20b5ced7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:35 compute-0 podman[96454]: 2026-01-23 09:53:35.041910442 +0000 UTC m=+0.171452814 container died a59f9aa53a72206957da5b829aaf9f93131c0a4246f1d25b0d3bd3ff20b5ced7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f0c2cbdc8ee65de0a8f6d9b9122c771f909aa096dd7583981671f7cb44a5777-merged.mount: Deactivated successfully.
Jan 23 09:53:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 23 09:53:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 23 09:53:35 compute-0 podman[96454]: 2026-01-23 09:53:35.080402811 +0000 UTC m=+0.209945183 container remove a59f9aa53a72206957da5b829aaf9f93131c0a4246f1d25b0d3bd3ff20b5ced7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_borg, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:35 compute-0 podman[96454]: 2026-01-23 09:53:35.084909161 +0000 UTC m=+0.214451543 volume remove 422eec9978968063730d117447aaba2b8a35c25753b4377faa334e030c8dbdc1
Jan 23 09:53:35 compute-0 systemd[1]: libpod-conmon-a59f9aa53a72206957da5b829aaf9f93131c0a4246f1d25b0d3bd3ff20b5ced7.scope: Deactivated successfully.
Jan 23 09:53:35 compute-0 systemd[1]: Reloading.
Jan 23 09:53:35 compute-0 systemd-rc-local-generator[96528]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:53:35 compute-0 systemd-sysv-generator[96538]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:53:35 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 25 completed events
Jan 23 09:53:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:53:35 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 23 09:53:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:35 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 11c5f6e9-b880-4337-8e3a-632bb379b69b (Global Recovery Event) in 10 seconds
Jan 23 09:53:35 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 23 09:53:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:35 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:35 compute-0 systemd[1]: Reloading.
Jan 23 09:53:35 compute-0 systemd-rc-local-generator[96653]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:53:35 compute-0 systemd-sysv-generator[96656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:53:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:35 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:35 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:53:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:53:35 2026: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Jan 23 09:53:35 compute-0 python3.9[96626]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:53:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 23 09:53:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 23 09:53:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 23 09:53:36 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 23 09:53:36 compute-0 ceph-mon[74335]: 12.13 scrub starts
Jan 23 09:53:36 compute-0 ceph-mon[74335]: 12.13 scrub ok
Jan 23 09:53:36 compute-0 ceph-mon[74335]: 10.12 scrub starts
Jan 23 09:53:36 compute-0 ceph-mon[74335]: 10.12 scrub ok
Jan 23 09:53:36 compute-0 ceph-mon[74335]: pgmap v123: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 23 09:53:36 compute-0 ceph-mon[74335]: 10.9 scrub starts
Jan 23 09:53:36 compute-0 ceph-mon[74335]: 10.9 scrub ok
Jan 23 09:53:36 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:36 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc8002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:36 compute-0 podman[96719]: 2026-01-23 09:53:36.172660468 +0000 UTC m=+0.041568876 volume create b5efb888005db14847d71c66c95453de688fc023ad156b0e822ac1aa28a81a46
Jan 23 09:53:36 compute-0 podman[96719]: 2026-01-23 09:53:36.182875116 +0000 UTC m=+0.051783524 container create c12cd358f71085f8f02219ac258799ba47dc04ec4aa13a22c98c7af3dc91dab0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d29d3a0b823fda93b77374ab3157f110bd53122e2065aced00e2ab5bc7e469/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d29d3a0b823fda93b77374ab3157f110bd53122e2065aced00e2ab5bc7e469/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:36 compute-0 podman[96719]: 2026-01-23 09:53:36.236029933 +0000 UTC m=+0.104938361 container init c12cd358f71085f8f02219ac258799ba47dc04ec4aa13a22c98c7af3dc91dab0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:36 compute-0 podman[96719]: 2026-01-23 09:53:36.241377899 +0000 UTC m=+0.110286317 container start c12cd358f71085f8f02219ac258799ba47dc04ec4aa13a22c98c7af3dc91dab0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:53:36 compute-0 bash[96719]: c12cd358f71085f8f02219ac258799ba47dc04ec4aa13a22c98c7af3dc91dab0
Jan 23 09:53:36 compute-0 podman[96719]: 2026-01-23 09:53:36.15795026 +0000 UTC m=+0.026858688 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 23 09:53:36 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:53:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[96734]: ts=2026-01-23T09:53:36.285Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Jan 23 09:53:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[96734]: ts=2026-01-23T09:53:36.285Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Jan 23 09:53:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[96734]: ts=2026-01-23T09:53:36.295Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Jan 23 09:53:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[96734]: ts=2026-01-23T09:53:36.298Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Jan 23 09:53:36 compute-0 sudo[96184]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:53:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[96734]: ts=2026-01-23T09:53:36.331Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 23 09:53:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[96734]: ts=2026-01-23T09:53:36.332Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 23 09:53:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[96734]: ts=2026-01-23T09:53:36.339Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Jan 23 09:53:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[96734]: ts=2026-01-23T09:53:36.339Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Jan 23 09:53:36 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 23 09:53:36 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 23 09:53:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:53:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 23 09:53:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 23 09:53:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 23 09:53:37 compute-0 sudo[96957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvjynhpyhkwbyobymwbpdyggnhzutfle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162016.9270146-51-229545628140754/AnsiballZ_command.py'
Jan 23 09:53:37 compute-0 sudo[96957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:53:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:37 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c98000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:37 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 23 09:53:37 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 23 09:53:37 compute-0 python3.9[96959]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:53:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:37 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 23 09:53:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:38 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[96734]: ts=2026-01-23T09:53:38.298Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000355653s
Jan 23 09:53:38 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 23 09:53:38 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 23 09:53:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 23 09:53:38 compute-0 ceph-mon[74335]: 12.18 scrub starts
Jan 23 09:53:38 compute-0 ceph-mon[74335]: 10.d scrub starts
Jan 23 09:53:38 compute-0 ceph-mon[74335]: 12.18 scrub ok
Jan 23 09:53:38 compute-0 ceph-mon[74335]: 10.d scrub ok
Jan 23 09:53:38 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 23 09:53:38 compute-0 ceph-mon[74335]: osdmap e94: 3 total, 3 up, 3 in
Jan 23 09:53:38 compute-0 ceph-mon[74335]: 10.a scrub starts
Jan 23 09:53:38 compute-0 ceph-mon[74335]: 10.a scrub ok
Jan 23 09:53:38 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:38 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 23 09:53:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:38 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 2d232966-1747-4439-9384-593746669617 (Updating alertmanager deployment (+1 -> 1))
Jan 23 09:53:38 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 2d232966-1747-4439-9384-593746669617 (Updating alertmanager deployment (+1 -> 1)) in 6 seconds
Jan 23 09:53:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 23 09:53:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:53:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:38 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev c7c0a2f4-2925-445b-b252-c747df94ca5a (Updating grafana deployment (+1 -> 1))
Jan 23 09:53:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Jan 23 09:53:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Jan 23 09:53:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Jan 23 09:53:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Jan 23 09:53:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Jan 23 09:53:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 23 09:53:38 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 23 09:53:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Jan 23 09:53:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Jan 23 09:53:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Jan 23 09:53:38 compute-0 sudo[96971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:53:38 compute-0 sudo[96971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:53:38 compute-0 sudo[96971]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:38 compute-0 sudo[96996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:53:38 compute-0 sudo[96996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:53:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 23 09:53:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 23 09:53:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:39 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc8002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:39 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 23 09:53:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 11.3 scrub starts
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 11.3 scrub ok
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.8 scrub starts
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.8 scrub ok
Jan 23 09:53:39 compute-0 ceph-mon[74335]: pgmap v125: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.b deep-scrub starts
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.b deep-scrub ok
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 11.e scrub starts
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 11.e scrub ok
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.2 scrub starts
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.2 scrub ok
Jan 23 09:53:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.1a scrub starts
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.1a scrub ok
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.18 scrub starts
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.4 deep-scrub starts
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.18 scrub ok
Jan 23 09:53:39 compute-0 ceph-mon[74335]: 10.4 deep-scrub ok
Jan 23 09:53:39 compute-0 ceph-mon[74335]: osdmap e95: 3 total, 3 up, 3 in
Jan 23 09:53:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:39 compute-0 ceph-mon[74335]: Regenerating cephadm self-signed grafana TLS certificates
Jan 23 09:53:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 23 09:53:39 compute-0 ceph-mon[74335]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 23 09:53:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:39 compute-0 ceph-mon[74335]: Deploying daemon grafana.compute-0 on compute-0
Jan 23 09:53:39 compute-0 ceph-mon[74335]: pgmap v127: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:39 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 23 09:53:39 compute-0 ceph-osd[82641]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 23 09:53:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 23 09:53:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 23 09:53:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 23 09:53:39 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 23 09:53:39 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 96 pg[10.d( v 62'768 (0'0,62'768] local-lis/les=78/79 n=8 ec=59/46 lis/c=78/78 les/c/f=79/79/0 sis=96 pruub=12.336092949s) [0] r=-1 lpr=96 pi=[78,96)/1 crt=62'768 mlcod 0'0 active pruub 249.997528076s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:39 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 96 pg[10.d( v 62'768 (0'0,62'768] local-lis/les=78/79 n=8 ec=59/46 lis/c=78/78 les/c/f=79/79/0 sis=96 pruub=12.336036682s) [0] r=-1 lpr=96 pi=[78,96)/1 crt=62'768 mlcod 0'0 unknown NOTIFY pruub 249.997528076s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:39 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 96 pg[10.1d( v 62'764 (0'0,62'764] local-lis/les=78/79 n=5 ec=59/46 lis/c=78/78 les/c/f=79/79/0 sis=96 pruub=12.334946632s) [0] r=-1 lpr=96 pi=[78,96)/1 crt=62'764 mlcod 0'0 active pruub 249.997528076s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:39 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 96 pg[10.1d( v 62'764 (0'0,62'764] local-lis/les=78/79 n=5 ec=59/46 lis/c=78/78 les/c/f=79/79/0 sis=96 pruub=12.334927559s) [0] r=-1 lpr=96 pi=[78,96)/1 crt=62'764 mlcod 0'0 unknown NOTIFY pruub 249.997528076s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:39 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 09:53:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:40 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:40 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 27 completed events
Jan 23 09:53:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:53:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 23 09:53:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 23 09:53:40 compute-0 ceph-mon[74335]: 10.1 scrub starts
Jan 23 09:53:40 compute-0 ceph-mon[74335]: 10.1d scrub starts
Jan 23 09:53:40 compute-0 ceph-mon[74335]: 10.1d scrub ok
Jan 23 09:53:40 compute-0 ceph-mon[74335]: 10.1 scrub ok
Jan 23 09:53:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 23 09:53:40 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 23 09:53:40 compute-0 ceph-mon[74335]: osdmap e96: 3 total, 3 up, 3 in
Jan 23 09:53:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 97 pg[10.d( v 62'768 (0'0,62'768] local-lis/les=78/79 n=8 ec=59/46 lis/c=78/78 les/c/f=79/79/0 sis=97) [0]/[1] r=0 lpr=97 pi=[78,97)/1 crt=62'768 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 97 pg[10.d( v 62'768 (0'0,62'768] local-lis/les=78/79 n=8 ec=59/46 lis/c=78/78 les/c/f=79/79/0 sis=97) [0]/[1] r=0 lpr=97 pi=[78,97)/1 crt=62'768 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 97 pg[10.1d( v 62'764 (0'0,62'764] local-lis/les=78/79 n=5 ec=59/46 lis/c=78/78 les/c/f=79/79/0 sis=97) [0]/[1] r=0 lpr=97 pi=[78,97)/1 crt=62'764 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:40 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 97 pg[10.1d( v 62'764 (0'0,62'764] local-lis/les=78/79 n=5 ec=59/46 lis/c=78/78 les/c/f=79/79/0 sis=97) [0]/[1] r=0 lpr=97 pi=[78,97)/1 crt=62'764 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:40 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 23 09:53:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 23 09:53:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 23 09:53:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:41 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:41 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc8002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 23 09:53:42 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 23 09:53:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 23 09:53:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 23 09:53:42 compute-0 ceph-mon[74335]: 10.13 scrub starts
Jan 23 09:53:42 compute-0 ceph-mon[74335]: 10.13 scrub ok
Jan 23 09:53:42 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:42 compute-0 ceph-mon[74335]: osdmap e97: 3 total, 3 up, 3 in
Jan 23 09:53:42 compute-0 ceph-mon[74335]: pgmap v130: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:42 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:53:42.061081) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162022061414, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 8011, "num_deletes": 252, "total_data_size": 13414099, "memory_usage": 13989512, "flush_reason": "Manual Compaction"}
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 23 09:53:42 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 98 pg[10.d( v 62'768 (0'0,62'768] local-lis/les=97/98 n=8 ec=59/46 lis/c=78/78 les/c/f=79/79/0 sis=97) [0]/[1] async=[0] r=0 lpr=97 pi=[78,97)/1 crt=62'768 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:42 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 98 pg[10.1d( v 62'764 (0'0,62'764] local-lis/les=97/98 n=5 ec=59/46 lis/c=78/78 les/c/f=79/79/0 sis=97) [0]/[1] async=[0] r=0 lpr=97 pi=[78,97)/1 crt=62'764 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:53:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:42 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162022379131, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 11436195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 153, "largest_seqno": 8155, "table_properties": {"data_size": 11406213, "index_size": 19179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9733, "raw_key_size": 94252, "raw_average_key_size": 24, "raw_value_size": 11332284, "raw_average_value_size": 2930, "num_data_blocks": 846, "num_entries": 3867, "num_filter_entries": 3867, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161662, "oldest_key_time": 1769161662, "file_creation_time": 1769162022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 318116 microseconds, and 215944 cpu microseconds.
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:53:42.379250) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 11436195 bytes OK
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:53:42.379315) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:53:42.383269) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:53:42.383313) EVENT_LOG_v1 {"time_micros": 1769162022383305, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:53:42.383341) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13377366, prev total WAL file size 13377366, number of live WAL files 2.
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:53:42.385960) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(10MB) 13(58KB) 8(1944B)]
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162022386231, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 11497915, "oldest_snapshot_seqno": -1}
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3685 keys, 11450930 bytes, temperature: kUnknown
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162022494754, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 11450930, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11421401, "index_size": 19243, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 92372, "raw_average_key_size": 25, "raw_value_size": 11348994, "raw_average_value_size": 3079, "num_data_blocks": 850, "num_entries": 3685, "num_filter_entries": 3685, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769162022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:53:42.495104) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 11450930 bytes
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:53:42.498385) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.9 rd, 105.4 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.0, 0.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3979, records dropped: 294 output_compression: NoCompression
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:53:42.498416) EVENT_LOG_v1 {"time_micros": 1769162022498403, "job": 4, "event": "compaction_finished", "compaction_time_micros": 108617, "compaction_time_cpu_micros": 41815, "output_level": 6, "num_output_files": 1, "total_output_size": 11450930, "num_input_records": 3979, "num_output_records": 3685, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162022500036, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162022500104, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162022500194, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 23 09:53:42 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:53:42.385720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:53:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 23 09:53:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 23 09:53:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 23 09:53:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:43 compute-0 ceph-mon[74335]: 10.11 scrub starts
Jan 23 09:53:43 compute-0 ceph-mon[74335]: 10.11 scrub ok
Jan 23 09:53:43 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 23 09:53:43 compute-0 ceph-mon[74335]: osdmap e98: 3 total, 3 up, 3 in
Jan 23 09:53:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 23 09:53:43 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 23 09:53:43 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 99 pg[10.d( v 62'768 (0'0,62'768] local-lis/les=97/98 n=8 ec=59/46 lis/c=97/78 les/c/f=98/79/0 sis=99 pruub=14.902777672s) [0] async=[0] r=-1 lpr=99 pi=[78,99)/1 crt=62'768 mlcod 62'768 active pruub 256.179931641s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:43 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 99 pg[10.d( v 62'768 (0'0,62'768] local-lis/les=97/98 n=8 ec=59/46 lis/c=97/78 les/c/f=98/79/0 sis=99 pruub=14.902626038s) [0] r=-1 lpr=99 pi=[78,99)/1 crt=62'768 mlcod 0'0 unknown NOTIFY pruub 256.179931641s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:43 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 99 pg[10.1d( v 62'764 (0'0,62'764] local-lis/les=97/98 n=5 ec=59/46 lis/c=97/78 les/c/f=98/79/0 sis=99 pruub=14.901392937s) [0] async=[0] r=-1 lpr=99 pi=[78,99)/1 crt=62'764 mlcod 62'764 active pruub 256.179992676s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:43 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 99 pg[10.1d( v 62'764 (0'0,62'764] local-lis/les=97/98 n=5 ec=59/46 lis/c=97/78 les/c/f=98/79/0 sis=99 pruub=14.901340485s) [0] r=-1 lpr=99 pi=[78,99)/1 crt=62'764 mlcod 0'0 unknown NOTIFY pruub 256.179992676s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:43 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:53:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:43 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:44 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:44 compute-0 ceph-mon[74335]: 10.3 scrub starts
Jan 23 09:53:44 compute-0 ceph-mon[74335]: 10.3 scrub ok
Jan 23 09:53:44 compute-0 ceph-mon[74335]: pgmap v132: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:44 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 23 09:53:44 compute-0 ceph-mon[74335]: osdmap e99: 3 total, 3 up, 3 in
Jan 23 09:53:44 compute-0 ceph-mon[74335]: 10.14 scrub starts
Jan 23 09:53:44 compute-0 ceph-mon[74335]: 10.14 scrub ok
Jan 23 09:53:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 23 09:53:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 23 09:53:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 23 09:53:44 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 23 09:53:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 09:53:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 23 09:53:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 23 09:53:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 23 09:53:45 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 23 09:53:45 compute-0 ceph-mon[74335]: osdmap e100: 3 total, 3 up, 3 in
Jan 23 09:53:45 compute-0 ceph-mon[74335]: 10.c scrub starts
Jan 23 09:53:45 compute-0 ceph-mon[74335]: 10.c scrub ok
Jan 23 09:53:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:45 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc8002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:45 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:45 compute-0 ceph-mgr[74633]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 23 09:53:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:46 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[96734]: ts=2026-01-23T09:53:46.301Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003221685s
Jan 23 09:53:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 23 09:53:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 23 09:53:47 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 23 09:53:47 compute-0 ceph-mon[74335]: pgmap v135: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 09:53:47 compute-0 ceph-mon[74335]: osdmap e101: 3 total, 3 up, 3 in
Jan 23 09:53:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:47 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:47 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc80091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:48 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:49 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c98002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:49 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:50 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc80091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:51 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:51 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c98002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:52 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:53 compute-0 ceph-mds[94628]: mds.beacon.cephfs.compute-0.ymknms missed beacon ack from the monitors
Jan 23 09:53:53 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 5.285976887s
Jan 23 09:53:53 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.285976887s
Jan 23 09:53:53 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.286514759s, txc = 0x55c0a8d14300
Jan 23 09:53:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 23 09:53:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 23 09:53:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 23 09:53:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 23 09:53:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 23 09:53:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 23 09:53:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 23 09:53:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:53 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 23 09:53:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 23 09:53:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 23 09:53:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 23 09:53:53 compute-0 ceph-mon[74335]: pgmap v137: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:53 compute-0 ceph-mon[74335]: osdmap e102: 3 total, 3 up, 3 in
Jan 23 09:53:53 compute-0 ceph-mon[74335]: pgmap v139: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:53 compute-0 ceph-mon[74335]: pgmap v140: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:53 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 23 09:53:53 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 23 09:53:53 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 23 09:53:53 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 23 09:53:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:53 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:54 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 23 09:53:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 23 09:53:54 compute-0 ceph-mon[74335]: pgmap v141: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:54 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 23 09:53:54 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 23 09:53:54 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 23 09:53:54 compute-0 ceph-mon[74335]: osdmap e103: 3 total, 3 up, 3 in
Jan 23 09:53:54 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 23 09:53:54 compute-0 podman[97061]: 2026-01-23 09:53:54.641115858 +0000 UTC m=+15.202852710 container create 717d3563f5d1412ca9ba06b7d693e550c987a647a7989a24b1bb420c9bd9c7ae (image=quay.io/ceph/grafana:10.4.0, name=sad_germain, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:54 compute-0 systemd[1]: Started libpod-conmon-717d3563f5d1412ca9ba06b7d693e550c987a647a7989a24b1bb420c9bd9c7ae.scope.
Jan 23 09:53:54 compute-0 podman[97061]: 2026-01-23 09:53:54.615115985 +0000 UTC m=+15.176852877 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 23 09:53:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:53:54 compute-0 podman[97061]: 2026-01-23 09:53:54.739148766 +0000 UTC m=+15.300885648 container init 717d3563f5d1412ca9ba06b7d693e550c987a647a7989a24b1bb420c9bd9c7ae (image=quay.io/ceph/grafana:10.4.0, name=sad_germain, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:54 compute-0 podman[97061]: 2026-01-23 09:53:54.750085396 +0000 UTC m=+15.311822248 container start 717d3563f5d1412ca9ba06b7d693e550c987a647a7989a24b1bb420c9bd9c7ae (image=quay.io/ceph/grafana:10.4.0, name=sad_germain, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:54 compute-0 sad_germain[97292]: 472 0
Jan 23 09:53:54 compute-0 systemd[1]: libpod-717d3563f5d1412ca9ba06b7d693e550c987a647a7989a24b1bb420c9bd9c7ae.scope: Deactivated successfully.
Jan 23 09:53:54 compute-0 podman[97061]: 2026-01-23 09:53:54.756953504 +0000 UTC m=+15.318690376 container attach 717d3563f5d1412ca9ba06b7d693e550c987a647a7989a24b1bb420c9bd9c7ae (image=quay.io/ceph/grafana:10.4.0, name=sad_germain, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:54 compute-0 podman[97061]: 2026-01-23 09:53:54.757771207 +0000 UTC m=+15.319508069 container died 717d3563f5d1412ca9ba06b7d693e550c987a647a7989a24b1bb420c9bd9c7ae (image=quay.io/ceph/grafana:10.4.0, name=sad_germain, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4818211066d8ac1baa3c306310798c109dc62564c2140c5a4473a9013efcc869-merged.mount: Deactivated successfully.
Jan 23 09:53:54 compute-0 podman[97061]: 2026-01-23 09:53:54.813454393 +0000 UTC m=+15.375191245 container remove 717d3563f5d1412ca9ba06b7d693e550c987a647a7989a24b1bb420c9bd9c7ae (image=quay.io/ceph/grafana:10.4.0, name=sad_germain, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:54 compute-0 systemd[1]: libpod-conmon-717d3563f5d1412ca9ba06b7d693e550c987a647a7989a24b1bb420c9bd9c7ae.scope: Deactivated successfully.
Jan 23 09:53:54 compute-0 podman[97311]: 2026-01-23 09:53:54.896687146 +0000 UTC m=+0.055623427 container create 7d75bc90952803e2c2d664208bf2799efbdeb3861b6b5ab41e178ce62ff3b17f (image=quay.io/ceph/grafana:10.4.0, name=frosty_taussig, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:54 compute-0 systemd[1]: Started libpod-conmon-7d75bc90952803e2c2d664208bf2799efbdeb3861b6b5ab41e178ce62ff3b17f.scope.
Jan 23 09:53:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:53:54 compute-0 podman[97311]: 2026-01-23 09:53:54.873013187 +0000 UTC m=+0.031949488 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 23 09:53:54 compute-0 podman[97311]: 2026-01-23 09:53:54.974309974 +0000 UTC m=+0.133246285 container init 7d75bc90952803e2c2d664208bf2799efbdeb3861b6b5ab41e178ce62ff3b17f (image=quay.io/ceph/grafana:10.4.0, name=frosty_taussig, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:54 compute-0 podman[97311]: 2026-01-23 09:53:54.982111818 +0000 UTC m=+0.141048099 container start 7d75bc90952803e2c2d664208bf2799efbdeb3861b6b5ab41e178ce62ff3b17f (image=quay.io/ceph/grafana:10.4.0, name=frosty_taussig, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:54 compute-0 frosty_taussig[97333]: 472 0
Jan 23 09:53:54 compute-0 systemd[1]: libpod-7d75bc90952803e2c2d664208bf2799efbdeb3861b6b5ab41e178ce62ff3b17f.scope: Deactivated successfully.
Jan 23 09:53:54 compute-0 podman[97311]: 2026-01-23 09:53:54.98800128 +0000 UTC m=+0.146937591 container attach 7d75bc90952803e2c2d664208bf2799efbdeb3861b6b5ab41e178ce62ff3b17f (image=quay.io/ceph/grafana:10.4.0, name=frosty_taussig, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:54 compute-0 podman[97311]: 2026-01-23 09:53:54.988505353 +0000 UTC m=+0.147441644 container died 7d75bc90952803e2c2d664208bf2799efbdeb3861b6b5ab41e178ce62ff3b17f (image=quay.io/ceph/grafana:10.4.0, name=frosty_taussig, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3e150da0f197481f1ec650e717b95438495fbf1807e72f64637e21b858c3967-merged.mount: Deactivated successfully.
Jan 23 09:53:55 compute-0 podman[97311]: 2026-01-23 09:53:55.040465918 +0000 UTC m=+0.199402189 container remove 7d75bc90952803e2c2d664208bf2799efbdeb3861b6b5ab41e178ce62ff3b17f (image=quay.io/ceph/grafana:10.4.0, name=frosty_taussig, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:55 compute-0 systemd[1]: libpod-conmon-7d75bc90952803e2c2d664208bf2799efbdeb3861b6b5ab41e178ce62ff3b17f.scope: Deactivated successfully.
Jan 23 09:53:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Jan 23 09:53:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 23 09:53:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 23 09:53:55 compute-0 systemd[1]: Reloading.
Jan 23 09:53:55 compute-0 systemd-rc-local-generator[97379]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:53:55 compute-0 systemd-sysv-generator[97382]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:53:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:55 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:55 compute-0 systemd[1]: Reloading.
Jan 23 09:53:55 compute-0 systemd-rc-local-generator[97420]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:53:55 compute-0 systemd-sysv-generator[97423]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:53:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 23 09:53:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 23 09:53:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 23 09:53:55 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 23 09:53:55 compute-0 ceph-mon[74335]: osdmap e104: 3 total, 3 up, 3 in
Jan 23 09:53:55 compute-0 ceph-mon[74335]: pgmap v144: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Jan 23 09:53:55 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 23 09:53:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:55 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc80091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:55 compute-0 sudo[96957]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:55 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:53:55 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 3ffbb7ad-666e-4673-b49f-532754232886 (Global Recovery Event) in 10 seconds
Jan 23 09:53:56 compute-0 podman[97501]: 2026-01-23 09:53:56.07266522 +0000 UTC m=+0.061546818 container create a54bba5b68ea44a8d28033c77b2e521ac5290ce8b599976a9c0c4e403ef44f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:56 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adafe46092c5a544096e22a442a1d44aeb796056587fe58494295a7d0ee6deb4/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adafe46092c5a544096e22a442a1d44aeb796056587fe58494295a7d0ee6deb4/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adafe46092c5a544096e22a442a1d44aeb796056587fe58494295a7d0ee6deb4/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adafe46092c5a544096e22a442a1d44aeb796056587fe58494295a7d0ee6deb4/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adafe46092c5a544096e22a442a1d44aeb796056587fe58494295a7d0ee6deb4/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:56 compute-0 podman[97501]: 2026-01-23 09:53:56.131581006 +0000 UTC m=+0.120462554 container init a54bba5b68ea44a8d28033c77b2e521ac5290ce8b599976a9c0c4e403ef44f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:56 compute-0 podman[97501]: 2026-01-23 09:53:56.136898951 +0000 UTC m=+0.125780479 container start a54bba5b68ea44a8d28033c77b2e521ac5290ce8b599976a9c0c4e403ef44f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:53:56 compute-0 podman[97501]: 2026-01-23 09:53:56.04529971 +0000 UTC m=+0.034181258 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 23 09:53:56 compute-0 bash[97501]: a54bba5b68ea44a8d28033c77b2e521ac5290ce8b599976a9c0c4e403ef44f21
Jan 23 09:53:56 compute-0 sshd-session[96382]: Connection closed by 192.168.122.30 port 44424
Jan 23 09:53:56 compute-0 systemd[1]: Started Ceph grafana.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:53:56 compute-0 sshd-session[96355]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:53:56 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 23 09:53:56 compute-0 systemd[1]: session-37.scope: Consumed 10.672s CPU time.
Jan 23 09:53:56 compute-0 systemd-logind[784]: Session 37 logged out. Waiting for processes to exit.
Jan 23 09:53:56 compute-0 systemd-logind[784]: Removed session 37.
Jan 23 09:53:56 compute-0 sudo[96996]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:53:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:53:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 23 09:53:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:56 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev c7c0a2f4-2925-445b-b252-c747df94ca5a (Updating grafana deployment (+1 -> 1))
Jan 23 09:53:56 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event c7c0a2f4-2925-445b-b252-c747df94ca5a (Updating grafana deployment (+1 -> 1)) in 18 seconds
Jan 23 09:53:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 23 09:53:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:56 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 4ad5b0fc-8efd-4184-90a7-cf60ba4b44f2 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 23 09:53:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Jan 23 09:53:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:56 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.qabsws on compute-0
Jan 23 09:53:56 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.qabsws on compute-0
Jan 23 09:53:56 compute-0 sudo[97536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:53:56 compute-0 sudo[97536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:53:56 compute-0 sudo[97536]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.359794443Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-01-23T09:53:56Z
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360195744Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360204754Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360209065Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360213025Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360217455Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360222775Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360227085Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360231815Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360236265Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360240775Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360245266Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360249596Z level=info msg=Target target=[all]
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360260336Z level=info msg="Path Home" path=/usr/share/grafana
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360264806Z level=info msg="Path Data" path=/var/lib/grafana
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360268996Z level=info msg="Path Logs" path=/var/log/grafana
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360273036Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360277336Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=settings t=2026-01-23T09:53:56.360282527Z level=info msg="App mode production"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=sqlstore t=2026-01-23T09:53:56.3607726Z level=info msg="Connecting to DB" dbtype=sqlite3
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=sqlstore t=2026-01-23T09:53:56.360798161Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.361695115Z level=info msg="Starting DB migrations"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.364011939Z level=info msg="Executing migration" id="create migration_log table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.365451658Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.439079ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.368112841Z level=info msg="Executing migration" id="create user table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.368986285Z level=info msg="Migration successfully executed" id="create user table" duration=873.824µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.371469833Z level=info msg="Executing migration" id="add unique index user.login"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.372268495Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=802.252µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.374525297Z level=info msg="Executing migration" id="add unique index user.email"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.375231547Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=700.759µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.377365605Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.378313181Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=962.196µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.380172932Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.380861581Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=688.489µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.385950641Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.388303465Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.358955ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.390102644Z level=info msg="Executing migration" id="create user table v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.390890336Z level=info msg="Migration successfully executed" id="create user table v2" duration=787.742µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.393764015Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.394458054Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=694.099µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.396293934Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.396912261Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=618.917µs
Jan 23 09:53:56 compute-0 sudo[97561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:53:56 compute-0 sudo[97561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.401018144Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.401456016Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=437.932µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.403181403Z level=info msg="Executing migration" id="Drop old table user_v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.403708607Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=523.614µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.405174068Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.406140424Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=966.637µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.407917863Z level=info msg="Executing migration" id="Update user table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.407964984Z level=info msg="Migration successfully executed" id="Update user table charset" duration=47.921µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.409783164Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.410705789Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=921.605µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.413372922Z level=info msg="Executing migration" id="Add missing user data"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.413569668Z level=info msg="Migration successfully executed" id="Add missing user data" duration=197.086µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.415878261Z level=info msg="Executing migration" id="Add is_disabled column to user"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.416783586Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=905.035µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.418258666Z level=info msg="Executing migration" id="Add index user.login/user.email"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.418902124Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=642.708µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.421047323Z level=info msg="Executing migration" id="Add is_service_account column to user"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.422563534Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.522001ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.424630191Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.433166885Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.535104ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.435225192Z level=info msg="Executing migration" id="Add uid column to user"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.436503757Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.277575ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.43844766Z level=info msg="Executing migration" id="Update uid column values for users"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.438647535Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=199.395µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.440323611Z level=info msg="Executing migration" id="Add unique index user_uid"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.441059312Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=737.73µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.443502189Z level=info msg="Executing migration" id="create temp user table v1-7"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.444405673Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=845.614µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.446606444Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.447345414Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=738.51µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.449607956Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.450223633Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=616.877µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.4523158Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.452916767Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=600.697µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.454837049Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.455533228Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=696.709µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.457877233Z level=info msg="Executing migration" id="Update temp_user table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.457901353Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=25.08µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.460200166Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.46106625Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=870.464µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.462843249Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.463415295Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=571.736µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.466592322Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.467174918Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=583.216µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.469321797Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.469937353Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=616.747µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.473247694Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.476154514Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.90972ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.478031005Z level=info msg="Executing migration" id="create temp_user v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.4789321Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=901.385µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.481182342Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.482032215Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=852.283µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.484104652Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.484898874Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=795.122µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.490090326Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.490943689Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=858.093µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.493235962Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.493969692Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=735.29µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.496210774Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.496604225Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=394.081µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.498301071Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.498890907Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=589.356µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.50046515Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.500906993Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=442.002µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.502511937Z level=info msg="Executing migration" id="create star table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.503089962Z level=info msg="Migration successfully executed" id="create star table" duration=577.965µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.505574251Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.50627976Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=706.469µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.509774546Z level=info msg="Executing migration" id="create org table v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.51065659Z level=info msg="Migration successfully executed" id="create org table v1" duration=885.884µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.513489068Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.514165126Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=676.238µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.516711396Z level=info msg="Executing migration" id="create org_user table v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.517378724Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=634.587µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.519961775Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.520674325Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=712.64µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.52269207Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.523384039Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=693.869µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.526841704Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.527680207Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=838.463µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.531245894Z level=info msg="Executing migration" id="Update org table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.531306586Z level=info msg="Migration successfully executed" id="Update org table charset" duration=61.232µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.534755821Z level=info msg="Executing migration" id="Update org_user table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.534808912Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=52.011µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.538189525Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.538500443Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=313.208µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.540254461Z level=info msg="Executing migration" id="create dashboard table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.541101455Z level=info msg="Migration successfully executed" id="create dashboard table" duration=846.784µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.54420612Z level=info msg="Executing migration" id="add index dashboard.account_id"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.544999702Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=794.372µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.54748313Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.54894873Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.46494ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.551214992Z level=info msg="Executing migration" id="create dashboard_tag table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.551941822Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=723.84µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.557418382Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.55842016Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=931.506µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.56208215Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.562923613Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=843.573µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.565614207Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.570527902Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.912394ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.572785584Z level=info msg="Executing migration" id="create dashboard v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.573572835Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=784.892µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.576144776Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.576844555Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=699.769µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.581389329Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.58212686Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=737.681µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.584380831Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.584735851Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=355.02µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.586582092Z level=info msg="Executing migration" id="drop table dashboard_v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.587695322Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.11161ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.591130597Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.59125884Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=131.574µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.593387208Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.595815215Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.419047ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.598476008Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.600143024Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.672036ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.603818324Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.605742477Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.927503ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.608162324Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.609178321Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.018788ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.613023297Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.614765245Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.745968ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.616765059Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.617707935Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=944.846µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.621188761Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.621964552Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=776.291µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.625293813Z level=info msg="Executing migration" id="Update dashboard table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.625322304Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=30.091µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.627849943Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.627912895Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=63.952µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.640579972Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.642613988Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.038426ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.645160068Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.647131222Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.975564ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.649219119Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.651223564Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.004535ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.653907258Z level=info msg="Executing migration" id="Add column uid in dashboard"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.655678606Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.770878ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.657556698Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.657778534Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=222.416µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.659725717Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.660669573Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=943.616µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.663049909Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.663806919Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=758.051µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.665534397Z level=info msg="Executing migration" id="Update dashboard title length"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.665577388Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=44.701µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.667795069Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.668598561Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=800.372µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.670997376Z level=info msg="Executing migration" id="create dashboard_provisioning"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.671938582Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=946.026µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.674282737Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.678570774Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.287418ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.68022485Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.680817986Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=593.426µs
Jan 23 09:53:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.684011893Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.685172975Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.163742ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.687617362Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.688503947Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=885.874µs
Jan 23 09:53:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.692904997Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.693235546Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=331.179µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.694950513Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.695482778Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=532.705µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.697082142Z level=info msg="Executing migration" id="Add check_sum column"
Jan 23 09:53:56 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.698904282Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.82117ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.70103248Z level=info msg="Executing migration" id="Add index for dashboard_title"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.701713469Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=684.499µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.703364754Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.703516498Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=166.295µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.705718539Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.705876413Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=157.735µs
Jan 23 09:53:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 23 09:53:56 compute-0 ceph-mon[74335]: osdmap e105: 3 total, 3 up, 3 in
Jan 23 09:53:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:56 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:56 compute-0 ceph-mon[74335]: Deploying daemon haproxy.rgw.default.compute-0.qabsws on compute-0
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.708260258Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.70906782Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=808.162µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.711703333Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.713633396Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.928752ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.715396474Z level=info msg="Executing migration" id="create data_source table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.716604007Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.207003ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.719127726Z level=info msg="Executing migration" id="add index data_source.account_id"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.719806085Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=678.289µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.722389536Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.723043364Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=654.078µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.725396188Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.726073687Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=678.099µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.727898857Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.728556125Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=654.538µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.730383845Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.735196707Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.811142ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.737463819Z level=info msg="Executing migration" id="create data_source table v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.738739374Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.281925ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.740562304Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.741545111Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=982.577µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.743792303Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.744625995Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=832.262µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.747279948Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.747995008Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=721.41µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.750103866Z level=info msg="Executing migration" id="Add column with_credentials"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.75245326Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.347205ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.756079469Z level=info msg="Executing migration" id="Add secure json data column"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.758849545Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.771296ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.760753178Z level=info msg="Executing migration" id="Update data_source table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.760802009Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=50.832µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.763780891Z level=info msg="Executing migration" id="Update initial version to 1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.764014097Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=234.986µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.765972361Z level=info msg="Executing migration" id="Add read_only data column"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.768380537Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.411736ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.771434871Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.771667637Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=235.727µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.773438555Z level=info msg="Executing migration" id="Update json_data with nulls"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.773632441Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=191.096µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.775383019Z level=info msg="Executing migration" id="Add uid column"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.777795895Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.412676ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.779962834Z level=info msg="Executing migration" id="Update uid value"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.780311244Z level=info msg="Migration successfully executed" id="Update uid value" duration=352.42µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.782432692Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.783289416Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=860.144µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.785515737Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.786261527Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=745.87µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.788770766Z level=info msg="Executing migration" id="create api_key table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.78963939Z level=info msg="Migration successfully executed" id="create api_key table" duration=865.524µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.792864888Z level=info msg="Executing migration" id="add index api_key.account_id"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.79366315Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=796.242µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.796667312Z level=info msg="Executing migration" id="add index api_key.key"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.797390622Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=723.86µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.799973303Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.80097034Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=998.367µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.804000453Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.805161215Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.165152ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.807175771Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.80789902Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=723.74µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.810989705Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.811840238Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=850.753µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.814060229Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Jan 23 09:53:56 compute-0 podman[97627]: 2026-01-23 09:53:56.81698496 +0000 UTC m=+0.045127239 container create 623ad18f7be6db4604237dd5ab1fce350c7ccfd897dba8598a28844f7188dfe4 (image=quay.io/ceph/haproxy:2.3, name=beautiful_shockley)
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.819779026Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=5.711817ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.823056956Z level=info msg="Executing migration" id="create api_key table v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.824431424Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.379878ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.827252261Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.828454574Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.200343ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.830560102Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.831472777Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=914.575µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.833396659Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.834166781Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=770.252µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.837035329Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.837666577Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=636.328µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.839588389Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.840306619Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=718.88µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.842581871Z level=info msg="Executing migration" id="Update api_key table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.842606742Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=26.731µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.844573436Z level=info msg="Executing migration" id="Add expires to api_key table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.84689711Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.323144ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.849027518Z level=info msg="Executing migration" id="Add service account foreign key"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.85128443Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.257212ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.853294835Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.853535382Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=241.517µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.855481365Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.857557792Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.072427ms
Jan 23 09:53:56 compute-0 systemd[1]: Started libpod-conmon-623ad18f7be6db4604237dd5ab1fce350c7ccfd897dba8598a28844f7188dfe4.scope.
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.859429223Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.861390317Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.961354ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.863458544Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.864114472Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=652.348µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.875677929Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.876571903Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=905.395µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.878593799Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.879438702Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=844.543µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.881209451Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.881947811Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=738.551µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.884799519Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.885814837Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.020988ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.889184249Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.890721471Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.540322ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.893444156Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.893500838Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=57.932µs
Jan 23 09:53:56 compute-0 podman[97627]: 2026-01-23 09:53:56.797451914 +0000 UTC m=+0.025594223 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.895462451Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.895486212Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=24.901µs
Jan 23 09:53:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.897910689Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.900711815Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.801687ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.903321457Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.905844446Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.526189ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.908766766Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.908855089Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=92.283µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.910888484Z level=info msg="Executing migration" id="create quota table v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.911713107Z level=info msg="Migration successfully executed" id="create quota table v1" duration=822.653µs
Jan 23 09:53:56 compute-0 podman[97627]: 2026-01-23 09:53:56.912721505 +0000 UTC m=+0.140863804 container init 623ad18f7be6db4604237dd5ab1fce350c7ccfd897dba8598a28844f7188dfe4 (image=quay.io/ceph/haproxy:2.3, name=beautiful_shockley)
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.914163454Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.914923605Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=760.161µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.917025933Z level=info msg="Executing migration" id="Update quota table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.917061274Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=36.651µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.919007277Z level=info msg="Executing migration" id="create plugin_setting table"
Jan 23 09:53:56 compute-0 podman[97627]: 2026-01-23 09:53:56.919381927 +0000 UTC m=+0.147524206 container start 623ad18f7be6db4604237dd5ab1fce350c7ccfd897dba8598a28844f7188dfe4 (image=quay.io/ceph/haproxy:2.3, name=beautiful_shockley)
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.919792828Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=786.401µs
Jan 23 09:53:56 compute-0 beautiful_shockley[97644]: 0 0
Jan 23 09:53:56 compute-0 systemd[1]: libpod-623ad18f7be6db4604237dd5ab1fce350c7ccfd897dba8598a28844f7188dfe4.scope: Deactivated successfully.
Jan 23 09:53:56 compute-0 podman[97627]: 2026-01-23 09:53:56.92603221 +0000 UTC m=+0.154174519 container attach 623ad18f7be6db4604237dd5ab1fce350c7ccfd897dba8598a28844f7188dfe4 (image=quay.io/ceph/haproxy:2.3, name=beautiful_shockley)
Jan 23 09:53:56 compute-0 podman[97627]: 2026-01-23 09:53:56.927166041 +0000 UTC m=+0.155308340 container died 623ad18f7be6db4604237dd5ab1fce350c7ccfd897dba8598a28844f7188dfe4 (image=quay.io/ceph/haproxy:2.3, name=beautiful_shockley)
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.927090099Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.928347163Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.259275ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.931733396Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.934637645Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.900989ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.938334077Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.938416929Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=98.092µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.943275362Z level=info msg="Executing migration" id="create session table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.944785124Z level=info msg="Migration successfully executed" id="create session table" duration=1.513622ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.948635119Z level=info msg="Executing migration" id="Drop old table playlist table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.948867826Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=233.597µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.951055556Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.951143538Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=89.272µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.954554132Z level=info msg="Executing migration" id="create playlist table v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.955754815Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.201213ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.960061953Z level=info msg="Executing migration" id="create playlist item table v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.961241975Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.184493ms
Jan 23 09:53:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a526f21359d9c874599bbe4228f6290b723bc8b446bc070f02c2e1978d5bd165-merged.mount: Deactivated successfully.
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.964217937Z level=info msg="Executing migration" id="Update playlist table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.964291879Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=75.632µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.966429677Z level=info msg="Executing migration" id="Update playlist_item table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.966453738Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=25.441µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.969733028Z level=info msg="Executing migration" id="Add playlist column created_at"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.972395961Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.613691ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.977409428Z level=info msg="Executing migration" id="Add playlist column updated_at"
Jan 23 09:53:56 compute-0 podman[97627]: 2026-01-23 09:53:56.979332311 +0000 UTC m=+0.207474590 container remove 623ad18f7be6db4604237dd5ab1fce350c7ccfd897dba8598a28844f7188dfe4 (image=quay.io/ceph/haproxy:2.3, name=beautiful_shockley)
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.979875126Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.471008ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.983445324Z level=info msg="Executing migration" id="drop preferences table v2"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.983604678Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=165.434µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.98587168Z level=info msg="Executing migration" id="drop preferences table v3"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.985952353Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=83.023µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.988711208Z level=info msg="Executing migration" id="create preferences table v3"
Jan 23 09:53:56 compute-0 systemd[1]: libpod-conmon-623ad18f7be6db4604237dd5ab1fce350c7ccfd897dba8598a28844f7188dfe4.scope: Deactivated successfully.
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.989708956Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.000208ms
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.991942097Z level=info msg="Executing migration" id="Update preferences table charset"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.991965878Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=30.71µs
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.994727323Z level=info msg="Executing migration" id="Add column team_id in preferences"
Jan 23 09:53:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:56.997473319Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.745625ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.000050749Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.000212774Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=163.845µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.002498726Z level=info msg="Executing migration" id="Add column week_start in preferences"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.005050756Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.55277ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.008150781Z level=info msg="Executing migration" id="Add column preferences.json_data"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.010589878Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.439837ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.012916592Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.013003744Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=89.562µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.016237543Z level=info msg="Executing migration" id="Add preferences index org_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.017093306Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=856.043µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.019854482Z level=info msg="Executing migration" id="Add preferences index user_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.020704445Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=849.943µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.024216282Z level=info msg="Executing migration" id="create alert table v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.025278991Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.062319ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.028396536Z level=info msg="Executing migration" id="add index alert org_id & id "
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.029208569Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=812.083µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.033375673Z level=info msg="Executing migration" id="add index alert state"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.034127884Z level=info msg="Migration successfully executed" id="add index alert state" duration=756.101µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.037223608Z level=info msg="Executing migration" id="add index alert dashboard_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.038625887Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.402409ms
Jan 23 09:53:57 compute-0 systemd[1]: Reloading.
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.042825042Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.043746737Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=926.125µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.046886043Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.048065996Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.184323ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.050914094Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.051717966Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=804.932µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.053536366Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.061536775Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.995549ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.064683281Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.065639598Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=956.227µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.067828208Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.06863955Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=811.792µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.071084567Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.071514799Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=429.912µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.073458972Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.074203263Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=744.31µs
Jan 23 09:53:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 23 09:53:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.076619239Z level=info msg="Executing migration" id="create alert_notification table v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.07739428Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=777.681µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.080058903Z level=info msg="Executing migration" id="Add column is_default"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.084434573Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.37191ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.087777035Z level=info msg="Executing migration" id="Add column frequency"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.093232254Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.455909ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.096499204Z level=info msg="Executing migration" id="Add column send_reminder"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.10001159Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.513256ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.10329916Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.106179239Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.880179ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.110408235Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.111521636Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.112661ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.114855757Z level=info msg="Executing migration" id="Update alert table charset"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.114908299Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=57.242µs
Jan 23 09:53:57 compute-0 systemd-rc-local-generator[97692]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.117664664Z level=info msg="Executing migration" id="Update alert_notification table charset"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.117717326Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=61.162µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.120540393Z level=info msg="Executing migration" id="create notification_journal table v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.121551611Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.006998ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.124198623Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.125262263Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.063819ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.127639638Z level=info msg="Executing migration" id="drop alert_notification_journal"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.128519952Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=876.104µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.130617839Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.131588166Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=968.867µs
Jan 23 09:53:57 compute-0 systemd-sysv-generator[97695]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.133479638Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.134225918Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=746.22µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.136498181Z level=info msg="Executing migration" id="Add for to alert table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.139634467Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.140846ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.141522268Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.144697765Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.175017ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.146747512Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.146929517Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=182.505µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.149082136Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.150184976Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.101901ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.153216589Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.154179595Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=964.186µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.15615014Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.159219094Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.068625ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.161939808Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.162023611Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=84.922µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.164741125Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.16565538Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=915.225µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.180122427Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.181495394Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.377747ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.184469756Z level=info msg="Executing migration" id="Drop old annotation table v4"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.184573339Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=104.263µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.190689706Z level=info msg="Executing migration" id="create annotation table v5"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.191738425Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.049429ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.20248903Z level=info msg="Executing migration" id="add index annotation 0 v3"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.203552189Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.069079ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.211275321Z level=info msg="Executing migration" id="add index annotation 1 v3"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.212442763Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.170672ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.215374263Z level=info msg="Executing migration" id="add index annotation 2 v3"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.216263598Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=889.805µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.221210893Z level=info msg="Executing migration" id="add index annotation 3 v3"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.222222411Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.015278ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.228260127Z level=info msg="Executing migration" id="add index annotation 4 v3"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.229768178Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.509241ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.233070319Z level=info msg="Executing migration" id="Update annotation table charset"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.233167601Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=104.113µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.236256516Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.240230685Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.970209ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.243499815Z level=info msg="Executing migration" id="Drop category_id index"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.244566734Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.07057ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.246888538Z level=info msg="Executing migration" id="Add column tags to annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.250220369Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.330772ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.252620485Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.253378675Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=762.05µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.255597926Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.256873501Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.280275ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.260314786Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.261641222Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.331927ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.263565135Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.272185351Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=8.615356ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.276068578Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.277043794Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=978.567µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.27907051Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.279905793Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=834.353µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.282313729Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.282838333Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=525.424µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.285821755Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.286461343Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=645.148µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.288650123Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.288867139Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=217.776µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.291193512Z level=info msg="Executing migration" id="Add created time to annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.294568155Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.372083ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.297178166Z level=info msg="Executing migration" id="Add updated time to annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.300294112Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.116736ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.302267086Z level=info msg="Executing migration" id="Add index for created in annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.303314565Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.046499ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.305744721Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.306625455Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=880.554µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.309124884Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.309404442Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=281.308µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.311847119Z level=info msg="Executing migration" id="Add epoch_end column"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.315021356Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.169326ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.317232136Z level=info msg="Executing migration" id="Add index for epoch_end"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.318045369Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=813.423µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.320393463Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.320628109Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=236.876µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.322803119Z level=info msg="Executing migration" id="Move region to single row"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.32318871Z level=info msg="Migration successfully executed" id="Move region to single row" duration=386.591µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.324976909Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.325890504Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=913.305µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.331870908Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.333090321Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.229014ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.335821266Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.336895886Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.077539ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.339156108Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.340128114Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=974.257µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.341940024Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.342727495Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=787.721µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.344656338Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.345365298Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=709.6µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.347207118Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.34726323Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=56.702µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.349717227Z level=info msg="Executing migration" id="create test_data table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.3505381Z level=info msg="Migration successfully executed" id="create test_data table" duration=817.402µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.352995977Z level=info msg="Executing migration" id="create dashboard_version table v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.353878041Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=879.524µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.356798131Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.357722047Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=927.296µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.360485112Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.361808749Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.327467ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.364642426Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.364924364Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=286.998µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.368136602Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.368595255Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=457.773µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.370428445Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.370492187Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=64.692µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.372720078Z level=info msg="Executing migration" id="create team table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.37354194Z level=info msg="Migration successfully executed" id="create team table" duration=821.562µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.376324697Z level=info msg="Executing migration" id="add index team.org_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.377284933Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.015168ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.37973551Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.380740498Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.005338ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.384220743Z level=info msg="Executing migration" id="Add column uid in team"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.387978176Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.759693ms
Jan 23 09:53:57 compute-0 systemd[1]: Reloading.
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.390617719Z level=info msg="Executing migration" id="Update uid column values in team"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.390943088Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=326.899µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.393317943Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.394553536Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.259474ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.39723595Z level=info msg="Executing migration" id="create team member table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.398252958Z level=info msg="Migration successfully executed" id="create team member table" duration=1.017538ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.400926631Z level=info msg="Executing migration" id="add index team_member.org_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.402055352Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.132781ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.4056253Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.406590717Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=966.666µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.40963533Z level=info msg="Executing migration" id="add index team_member.team_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.41072672Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.09565ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.413460065Z level=info msg="Executing migration" id="Add column email to team table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.417788074Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.326129ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.419971954Z level=info msg="Executing migration" id="Add column external to team_member table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.424260231Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.287788ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.426468692Z level=info msg="Executing migration" id="Add column permission to team_member table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.430187614Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.717592ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.432223169Z level=info msg="Executing migration" id="create dashboard acl table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.433488964Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.265225ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.437305689Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.439246132Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.942743ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.441979577Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.442986125Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.006407ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.447370585Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.448572718Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.203203ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.455929149Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.457324558Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.399739ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.45994621Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Jan 23 09:53:57 compute-0 systemd-rc-local-generator[97734]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.461102951Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.158011ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.463679472Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.464797363Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.117901ms
Jan 23 09:53:57 compute-0 systemd-sysv-generator[97737]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.469007858Z level=info msg="Executing migration" id="add index dashboard_permission"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.470075577Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.070799ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.472242077Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.472970687Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=728.32µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.475002342Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.475253399Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=251.327µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.477685316Z level=info msg="Executing migration" id="create tag table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.478591311Z level=info msg="Migration successfully executed" id="create tag table" duration=905.175µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.483033793Z level=info msg="Executing migration" id="add index tag.key_value"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.484882943Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.852941ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.487293529Z level=info msg="Executing migration" id="create login attempt table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.488071251Z level=info msg="Migration successfully executed" id="create login attempt table" duration=777.452µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:57 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.49132341Z level=info msg="Executing migration" id="add index login_attempt.username"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.492692357Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.371447ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.495131964Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.49643321Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.303226ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.500307146Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.511567985Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=11.254039ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.513805926Z level=info msg="Executing migration" id="create login_attempt v2"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.514725252Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=923.416µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.51868636Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.51977506Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.08675ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.522382442Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.522754812Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=373.22µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.524669904Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.525291121Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=621.897µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.527561604Z level=info msg="Executing migration" id="create user auth table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.528196301Z level=info msg="Migration successfully executed" id="create user auth table" duration=634.607µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.530388811Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.531537643Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.152952ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.534015741Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.534124324Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=110.273µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.536521299Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.541997509Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.47212ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.544848198Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.549366082Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.501504ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.551790288Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.556401724Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.611566ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.557851684Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.561537935Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.683301ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.564087695Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.565244357Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.156542ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.56789852Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.571750795Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.851055ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.573675268Z level=info msg="Executing migration" id="create server_lock table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.57447742Z level=info msg="Migration successfully executed" id="create server_lock table" duration=802.982µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.576772863Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.577643017Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=870.874µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.579727174Z level=info msg="Executing migration" id="create user auth token table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.580527426Z level=info msg="Migration successfully executed" id="create user auth token table" duration=800.322µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.583742164Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.585096111Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.359127ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.588280769Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.589561944Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.288946ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.592115604Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.593231904Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.11766ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.595519247Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.600226336Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.699929ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.602649023Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.603631049Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=982.127µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.606149048Z level=info msg="Executing migration" id="create cache_data table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.606959251Z level=info msg="Migration successfully executed" id="create cache_data table" duration=810.373µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.610784656Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.611776903Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=992.537µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.614064125Z level=info msg="Executing migration" id="create short_url table v1"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.614929599Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=866.014µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.617420988Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.618277321Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=856.634µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.622052135Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.622162528Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=113.874µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.624678627Z level=info msg="Executing migration" id="delete alert_definition table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.6248051Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=126.914µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.626444765Z level=info msg="Executing migration" id="recreate alert_definition table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.627232087Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=788.751µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.629502519Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.630396393Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=894.594µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.632379478Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.633311143Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=932.095µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.635413251Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.635513044Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=101.813µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.637523869Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.638456294Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=932.195µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.640769498Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.642016892Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.248384ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.644454839Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.645649952Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.195493ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.647729269Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.648796768Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.067489ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.650953457Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.655823991Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.867993ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.657972209Z level=info msg="Executing migration" id="drop alert_definition table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.65909372Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.121201ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.661079585Z level=info msg="Executing migration" id="delete alert_definition_version table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.661177117Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=98.242µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.663733787Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.66457383Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=840.633µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.666254057Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.667246894Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=993.118µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.669130875Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.670086082Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=953.767µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.672048005Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.672163889Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=113.183µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.674210545Z level=info msg="Executing migration" id="drop alert_definition_version table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.675550272Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.343707ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.677394152Z level=info msg="Executing migration" id="create alert_instance table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.678263946Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=869.704µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.67987756Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.680872767Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=994.787µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.682655276Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.683499969Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=845.003µs
Jan 23 09:53:57 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.qabsws for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.685509285Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.689681049Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.170984ms
Jan 23 09:53:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.693085722Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.693960546Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=876.514µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.695761186Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.696985089Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.223843ms
Jan 23 09:53:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 23 09:53:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.702031938Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Jan 23 09:53:57 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.728556325Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.517347ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.731043093Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.751583706Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=20.535893ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.754525837Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.755493883Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=967.746µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.757470728Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.758478025Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.007927ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.760676526Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.764976694Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.296097ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.766755182Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.771231555Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.475743ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.774140265Z level=info msg="Executing migration" id="create alert_rule table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.775306807Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.166582ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.777681052Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.778565726Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=884.954µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.782057352Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.783004288Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=947.306µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.786313649Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.787332847Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.019487ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.789809774Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.789905107Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=97.063µs
Jan 23 09:53:57 compute-0 ceph-mon[74335]: osdmap e106: 3 total, 3 up, 3 in
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:57 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:57 compute-0 ceph-mon[74335]: pgmap v147: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:53:57 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 23 09:53:57 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.791724977Z level=info msg="Executing migration" id="add column for to alert_rule"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.797640299Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.909062ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.801058363Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.807449128Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.385575ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.809567996Z level=info msg="Executing migration" id="add column labels to alert_rule"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.8155315Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.963684ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.817549815Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.818751678Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.201823ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.820597689Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.821711619Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.11345ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.823592991Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.829199275Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.602453ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.831393745Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.836826584Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.431589ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.838781977Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.839894508Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.118111ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.84217602Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.848073062Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.884732ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.851621569Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.85746943Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.84676ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.859418873Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.859522536Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=103.933µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.861627874Z level=info msg="Executing migration" id="create alert_rule_version table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.862668922Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.041578ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.866479367Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.867495745Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.017208ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.870465286Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.871524185Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.058489ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.875038801Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.875116044Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=78.583µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.878172677Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.883672558Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.495851ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.885823177Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.890380832Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.557335ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.892414408Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.896966393Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.550145ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.899159183Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.904410407Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.250214ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.906517205Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.911269585Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.74988ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.913045864Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.913140206Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=101.083µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.914919795Z level=info msg="Executing migration" id=create_alert_configuration_table
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.915670646Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=750.261µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.917587548Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.922491383Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.903635ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.924154828Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.92423278Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=78.562µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.926233925Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.930750329Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.515014ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.932712983Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.933626318Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=914.025µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.935916051Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Jan 23 09:53:57 compute-0 podman[97788]: 2026-01-23 09:53:57.940528837 +0000 UTC m=+0.041943151 container create 872bb66aeaa09288ead0a99e17c29682960b011c0b0f7af2b0513c1ab79aba61 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-rgw-default-compute-0-qabsws)
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.940938548Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=5.016627ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.943739415Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.944593199Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=855.474µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.951634162Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.952827554Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.196792ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.955725254Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.961015069Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=5.292615ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.964012681Z level=info msg="Executing migration" id="create provenance_type table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.965035829Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.025958ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.967421065Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.968520935Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.100101ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.971600729Z level=info msg="Executing migration" id="create alert_image table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.972596996Z level=info msg="Migration successfully executed" id="create alert_image table" duration=999.517µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.974892009Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.975972529Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.08113ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.978674943Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.978819557Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=150.044µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.980649977Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.981806319Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.157202ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.984085732Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.985527701Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.441749ms
Jan 23 09:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceecdec01e4df8cb55ab55bc2d40efca340d57ab353df5123ef66bdd22eecc4a/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.98730601Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.987786613Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.990556589Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.991073743Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=517.074µs
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.992565874Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.993848109Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.282235ms
Jan 23 09:53:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:57.996104641Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Jan 23 09:53:57 compute-0 podman[97788]: 2026-01-23 09:53:57.9978881 +0000 UTC m=+0.099302434 container init 872bb66aeaa09288ead0a99e17c29682960b011c0b0f7af2b0513c1ab79aba61 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-rgw-default-compute-0-qabsws)
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.002581579Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.463037ms
Jan 23 09:53:58 compute-0 podman[97788]: 2026-01-23 09:53:58.003084342 +0000 UTC m=+0.104498646 container start 872bb66aeaa09288ead0a99e17c29682960b011c0b0f7af2b0513c1ab79aba61 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-rgw-default-compute-0-qabsws)
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.005280583Z level=info msg="Executing migration" id="create library_element table v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.00664096Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.362867ms
Jan 23 09:53:58 compute-0 bash[97788]: 872bb66aeaa09288ead0a99e17c29682960b011c0b0f7af2b0513c1ab79aba61
Jan 23 09:53:58 compute-0 podman[97788]: 2026-01-23 09:53:57.922902454 +0000 UTC m=+0.024316788 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.0102959Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.011683868Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.388788ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.014319841Z level=info msg="Executing migration" id="create library_element_connection table v1"
Jan 23 09:53:58 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.qabsws for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.015233686Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=915.146µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.017708973Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-rgw-default-compute-0-qabsws[97804]: [NOTICE] 022/095358 (2) : New worker #1 (4) forked
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.018714241Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.005338ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.022659839Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.02378705Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.129851ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.026290999Z level=info msg="Executing migration" id="increase max description length to 2048"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.02631653Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=27.011µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.0281684Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.028226712Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=59.122µs
Jan 23 09:53:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.03032619Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.03070666Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=381.271µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.032836968Z level=info msg="Executing migration" id="create data_keys table"
Jan 23 09:53:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.004000110s ======
Jan 23 09:53:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:58.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000110s
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.033956779Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.119521ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.037611839Z level=info msg="Executing migration" id="create secrets table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.038456142Z level=info msg="Migration successfully executed" id="create secrets table" duration=843.773µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.04091015Z level=info msg="Executing migration" id="rename data_keys name column to id"
Jan 23 09:53:58 compute-0 sudo[97561]: pam_unix(sudo:session): session closed for user root
Jan 23 09:53:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.072903787Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=31.971686ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.075713974Z level=info msg="Executing migration" id="add name column into data_keys"
Jan 23 09:53:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.084428243Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=8.706359ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.087428965Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.087796355Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=369.33µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.094705015Z level=info msg="Executing migration" id="rename data_keys name column to label"
Jan 23 09:53:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:58 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:58 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.izjwnk on compute-2
Jan 23 09:53:58 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.izjwnk on compute-2
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.131901555Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=37.191639ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.136024298Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.172326063Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=36.298276ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.175286964Z level=info msg="Executing migration" id="create kv_store table v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.176636851Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.350357ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.179289364Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.180786445Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.499731ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.183258993Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.183644033Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=386.2µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.185607047Z level=info msg="Executing migration" id="create permission table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.186644536Z level=info msg="Migration successfully executed" id="create permission table" duration=1.040779ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.190036259Z level=info msg="Executing migration" id="add unique index permission.role_id"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.191411766Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.379467ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.193773901Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.194780609Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.006508ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.197604746Z level=info msg="Executing migration" id="create role table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.199141258Z level=info msg="Migration successfully executed" id="create role table" duration=1.539582ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.202030248Z level=info msg="Executing migration" id="add column display_name"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.20867446Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.636692ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.211039355Z level=info msg="Executing migration" id="add column group_name"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.216657199Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.613213ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.219383363Z level=info msg="Executing migration" id="add index role.org_id"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.220658208Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.273635ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.22328527Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.224533155Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.251055ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.228228766Z level=info msg="Executing migration" id="add index role_org_id_uid"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.22947127Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.242794ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.231901457Z level=info msg="Executing migration" id="create team role table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.232886044Z level=info msg="Migration successfully executed" id="create team role table" duration=982.287µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.23530284Z level=info msg="Executing migration" id="add index team_role.org_id"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.236673287Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.371527ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.239611478Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.24077276Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.161352ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.243797723Z level=info msg="Executing migration" id="add index team_role.team_id"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.245302224Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.509731ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.248306867Z level=info msg="Executing migration" id="create user role table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.249447288Z level=info msg="Migration successfully executed" id="create user role table" duration=1.140992ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.252858811Z level=info msg="Executing migration" id="add index user_role.org_id"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.254167067Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.309816ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.257866009Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.259148224Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.282605ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.261729295Z level=info msg="Executing migration" id="add index user_role.user_id"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.262900597Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.171983ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.265753475Z level=info msg="Executing migration" id="create builtin role table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.266919667Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.164972ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.269294152Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.270517436Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.224203ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.274587777Z level=info msg="Executing migration" id="add index builtin_role.name"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.276240842Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.650445ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.279949204Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.287886882Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.935358ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.290954906Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.292339374Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.386068ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.295091379Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.29657485Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.485161ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.300190849Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.301609668Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.419619ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.303808528Z level=info msg="Executing migration" id="add unique index role.uid"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.305205067Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.397529ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.308034294Z level=info msg="Executing migration" id="create seed assignment table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.309096743Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.058089ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.311988873Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.313248377Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.271635ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.317448282Z level=info msg="Executing migration" id="add column hidden to role table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.324036383Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.584901ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.327452007Z level=info msg="Executing migration" id="permission kind migration"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.334713746Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.258829ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.34581724Z level=info msg="Executing migration" id="permission attribute migration"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.353342017Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.523106ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.356326688Z level=info msg="Executing migration" id="permission identifier migration"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.362846267Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=6.519029ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.365809818Z level=info msg="Executing migration" id="add permission identifier index"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.367507415Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.701807ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.370667262Z level=info msg="Executing migration" id="add permission action scope role_id index"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.37204879Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.381288ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.377862049Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.379408781Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.550692ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.3837309Z level=info msg="Executing migration" id="create query_history table v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.385143949Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.417189ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.39869658Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.400222562Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.530462ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.430811171Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.431110859Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=308.878µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.434618995Z level=info msg="Executing migration" id="rbac disabled migrator"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.43478823Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=173.625µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.439644253Z level=info msg="Executing migration" id="teams permissions migration"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.440412734Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=773.541µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.442968514Z level=info msg="Executing migration" id="dashboard permissions"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.443566961Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=598.796µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.446042088Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.446868281Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=828.393µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.450442229Z level=info msg="Executing migration" id="drop managed folder create actions"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.450753938Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=313.159µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.453786521Z level=info msg="Executing migration" id="alerting notification permissions"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.454447299Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=662.338µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.456578277Z level=info msg="Executing migration" id="create query_history_star table v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.457522213Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=943.416µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.460475234Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.461836222Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.361898ms
Jan 23 09:53:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.464836814Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.474050676Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.204862ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.477871711Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.478000635Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=134.134µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.480837613Z level=info msg="Executing migration" id="create correlation table v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.482237491Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.400039ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.486684453Z level=info msg="Executing migration" id="add index correlations.uid"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.490424395Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=3.743782ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.495555266Z level=info msg="Executing migration" id="add index correlations.source_uid"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.496760639Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.207893ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.50043967Z level=info msg="Executing migration" id="add correlation config column"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.507960016Z level=info msg="Migration successfully executed" id="add correlation config column" duration=7.509696ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.510767833Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.512183262Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.419709ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.514735332Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.516086809Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.357587ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.518596158Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.540948031Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.345382ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.543598514Z level=info msg="Executing migration" id="create correlation v2"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.546221175Z level=info msg="Migration successfully executed" id="create correlation v2" duration=2.623032ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.548502438Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.549728562Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.226854ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.556700013Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.558861772Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.16712ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.565071552Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.567042936Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.977934ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.570604424Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.571981992Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=1.389578ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.576533537Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.5777669Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.235184ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.581277457Z level=info msg="Executing migration" id="add provisioning column"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.587903308Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.619472ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.59162656Z level=info msg="Executing migration" id="create entity_events table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.592785132Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.160532ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.597122371Z level=info msg="Executing migration" id="create dashboard public config v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.598253242Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.161542ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.605058939Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.605760038Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.610501268Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.611107625Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.614463927Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.616830021Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=2.366655ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.619383341Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.62042225Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.038849ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.624833201Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.626223779Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.395008ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.63793708Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.639404451Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.417699ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.642734292Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.643864653Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.131061ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.646418883Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.647471562Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.055529ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.649867977Z level=info msg="Executing migration" id="Drop public config table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.65070111Z level=info msg="Migration successfully executed" id="Drop public config table" duration=835.593µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.653591599Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.654533875Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=942.456µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.660119138Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.661247779Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.130281ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.663666216Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.664761976Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.09493ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.667527192Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.668673863Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.148052ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.675193062Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Jan 23 09:53:58 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 107 pg[10.12( v 61'760 (0'0,61'760] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=107 pruub=13.646791458s) [2] r=-1 lpr=107 pi=[67,107)/1 crt=61'760 mlcod 0'0 active pruub 270.438262939s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:58 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 107 pg[10.12( v 61'760 (0'0,61'760] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=107 pruub=13.646709442s) [2] r=-1 lpr=107 pi=[67,107)/1 crt=61'760 mlcod 0'0 unknown NOTIFY pruub 270.438262939s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.698668045Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.467593ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.70137871Z level=info msg="Executing migration" id="add annotations_enabled column"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.708906306Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.523776ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.711704013Z level=info msg="Executing migration" id="add time_selection_enabled column"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.719028054Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.30563ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.721391499Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.721657326Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=267.227µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.723532587Z level=info msg="Executing migration" id="add share column"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.730336834Z level=info msg="Migration successfully executed" id="add share column" duration=6.802687ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.734075466Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.734377765Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=305.169µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.73711033Z level=info msg="Executing migration" id="create file table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.738180229Z level=info msg="Migration successfully executed" id="create file table" duration=1.069999ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.748698647Z level=info msg="Executing migration" id="file table idx: path natural pk"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.749988403Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.299036ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.752950804Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.753976922Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.026278ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.756591894Z level=info msg="Executing migration" id="create file_meta table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.757461738Z level=info msg="Migration successfully executed" id="create file_meta table" duration=870.184µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.761257312Z level=info msg="Executing migration" id="file table idx: path key"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.762332101Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.077029ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.766825864Z level=info msg="Executing migration" id="set path collation in file table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.766965668Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=144.794µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.770589218Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.770717931Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=134.784µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.772658644Z level=info msg="Executing migration" id="managed permissions migration"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.773292502Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=634.378µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.77504925Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.775243165Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=193.795µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.777080946Z level=info msg="Executing migration" id="RBAC action name migrator"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.778309479Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.229213ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.780466318Z level=info msg="Executing migration" id="Add UID column to playlist"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.78671263Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.245082ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.788526549Z level=info msg="Executing migration" id="Update uid column values in playlist"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.788671303Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=145.174µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.791177042Z level=info msg="Executing migration" id="Add index for uid in playlist"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.792318573Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.142261ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.795305315Z level=info msg="Executing migration" id="update group index for alert rules"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.79585891Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=560.865µs
Jan 23 09:53:58 compute-0 ceph-mon[74335]: osdmap e107: 3 total, 3 up, 3 in
Jan 23 09:53:58 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:58 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:58 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:58 compute-0 ceph-mon[74335]: Deploying daemon haproxy.rgw.default.compute-2.izjwnk on compute-2
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.798452042Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.798702268Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=251.407µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.801179726Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.801834914Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=659.878µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.804178729Z level=info msg="Executing migration" id="add action column to seed_assignment"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.81077862Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.600611ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.812807315Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.819578251Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.763906ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.821976667Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.823130198Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.155121ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.824990819Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.908791187Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=83.784618ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.91254952Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.91472614Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.17964ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.919420729Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.921395643Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.978435ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.92423239Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.954045978Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=29.803658ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.957677268Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.967141207Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.448609ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.970454778Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.971017783Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=568.525µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.974054307Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.974776986Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=730.98µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.977552963Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.977765668Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=212.825µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.980257037Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.98074468Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=495.394µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.983408893Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.983680771Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=272.888µs
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.987064483Z level=info msg="Executing migration" id="create folder table"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.988465482Z level=info msg="Migration successfully executed" id="create folder table" duration=1.393898ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.992166743Z level=info msg="Executing migration" id="Add index for parent_uid"
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.994026664Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.861521ms
Jan 23 09:53:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:58.999056692Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.000506052Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.45106ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.005826028Z level=info msg="Executing migration" id="Update folder title length"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.00590565Z level=info msg="Migration successfully executed" id="Update folder title length" duration=87.652µs
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.008697096Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.010502236Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.80552ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.015225545Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.01683876Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.616435ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.019454781Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.020710106Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.254935ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.024314945Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.025083496Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=781.632µs
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.029658341Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.030026451Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=370.82µs
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.032720375Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.0343381Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.623655ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.037443675Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.038936036Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.494881ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.04200214Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.043140951Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.141631ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.049141185Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.050468372Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.326257ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.057833154Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.058974665Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.146421ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.061899965Z level=info msg="Executing migration" id="create anon_device table"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.062662526Z level=info msg="Migration successfully executed" id="create anon_device table" duration=763.821µs
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.068670381Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.070222834Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.555513ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.074429419Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Jan 23 09:53:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 459 B/s rd, 0 op/s
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.075484938Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.056399ms
Jan 23 09:53:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 23 09:53:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.080795313Z level=info msg="Executing migration" id="create signing_key table"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.081938805Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.144362ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.086924412Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.088441013Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.519192ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.092465703Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.093865532Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.402419ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.096817783Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.097336607Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=524.664µs
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.100117353Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.107572798Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.454575ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.109960473Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.110633592Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=675.079µs
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.112877963Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.113919322Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.041259ms
Jan 23 09:53:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.117285674Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.118532038Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.246784ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.120583424Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.121634943Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.048329ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.124610695Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Jan 23 09:53:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 23 09:53:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.125789197Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.182432ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.129984192Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.131252187Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.271575ms
Jan 23 09:53:59 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.134824644Z level=info msg="Executing migration" id="create sso_setting table"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.135961205Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.141861ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.142200826Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.14307412Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=876.574µs
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.149889617Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.150140254Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=251.637µs
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.207768634Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.207922448Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=157.274µs
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.296970291Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Jan 23 09:53:59 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 108 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=66/66 les/c/f=67/67/0 sis=108) [1] r=0 lpr=108 pi=[66,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:59 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 108 pg[10.12( v 61'760 (0'0,61'760] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=108) [2]/[1] r=0 lpr=108 pi=[67,108)/1 crt=61'760 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:53:59 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 108 pg[10.12( v 61'760 (0'0,61'760] local-lis/les=67/68 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=108) [2]/[1] r=0 lpr=108 pi=[67,108)/1 crt=61'760 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.308217719Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.241708ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.310609135Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.320019893Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.404138ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.323410346Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.32392405Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=515.604µs
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=migrator t=2026-01-23T09:53:59.326877501Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.962966805s
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=sqlstore t=2026-01-23T09:53:59.328338001Z level=info msg="Created default organization"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=secrets t=2026-01-23T09:53:59.331196479Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=plugin.store t=2026-01-23T09:53:59.375200126Z level=info msg="Loading plugins..."
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=local.finder t=2026-01-23T09:53:59.456218897Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=plugin.store t=2026-01-23T09:53:59.456396572Z level=info msg="Plugins loaded" count=55 duration=81.197706ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=query_data t=2026-01-23T09:53:59.45922721Z level=info msg="Query Service initialization"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=live.push_http t=2026-01-23T09:53:59.462943652Z level=info msg="Live Push Gateway initialization"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ngalert.migration t=2026-01-23T09:53:59.466044737Z level=info msg=Starting
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ngalert.migration t=2026-01-23T09:53:59.466511419Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ngalert.migration orgID=1 t=2026-01-23T09:53:59.466938131Z level=info msg="Migrating alerts for organisation"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ngalert.migration orgID=1 t=2026-01-23T09:53:59.468127444Z level=info msg="Alerts found to migrate" alerts=0
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ngalert.migration t=2026-01-23T09:53:59.47018033Z level=info msg="Completed alerting migration"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ngalert.state.manager t=2026-01-23T09:53:59.490842396Z level=info msg="Running in alternative execution of Error/NoData mode"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:59 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=infra.usagestats.collector t=2026-01-23T09:53:59.493069428Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=provisioning.datasources t=2026-01-23T09:53:59.494313912Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=provisioning.alerting t=2026-01-23T09:53:59.504694276Z level=info msg="starting to provision alerting"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=provisioning.alerting t=2026-01-23T09:53:59.504792819Z level=info msg="finished to provision alerting"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ngalert.state.manager t=2026-01-23T09:53:59.505170219Z level=info msg="Warming state cache for startup"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ngalert.multiorg.alertmanager t=2026-01-23T09:53:59.505446847Z level=info msg="Starting MultiOrg Alertmanager"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ngalert.state.manager t=2026-01-23T09:53:59.505786266Z level=info msg="State cache has been initialized" states=0 duration=615.767µs
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ngalert.scheduler t=2026-01-23T09:53:59.505830677Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ticker t=2026-01-23T09:53:59.505895909Z level=info msg=starting first_tick=2026-01-23T09:54:00Z
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=grafanaStorageLogger t=2026-01-23T09:53:59.505986512Z level=info msg="Storage starting"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=http.server t=2026-01-23T09:53:59.507886834Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=provisioning.dashboard t=2026-01-23T09:53:59.508066799Z level=info msg="starting to provision dashboards"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=http.server t=2026-01-23T09:53:59.508189942Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=grafana.update.checker t=2026-01-23T09:53:59.575032445Z level=info msg="Update check succeeded" duration=69.563648ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=plugins.update.checker t=2026-01-23T09:53:59.576091024Z level=info msg="Update check succeeded" duration=70.435552ms
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=sqlstore.transactions t=2026-01-23T09:53:59.579998041Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=sqlstore.transactions t=2026-01-23T09:53:59.595115905Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=sqlstore.transactions t=2026-01-23T09:53:59.615088043Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:53:59 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:53:59 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 23 09:53:59 compute-0 ceph-mon[74335]: pgmap v149: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 459 B/s rd, 0 op/s
Jan 23 09:53:59 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 23 09:53:59 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 23 09:53:59 compute-0 ceph-mon[74335]: osdmap e108: 3 total, 3 up, 3 in
Jan 23 09:53:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:53:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:53:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:59.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:53:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:53:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=provisioning.dashboard t=2026-01-23T09:53:59.889470157Z level=info msg="finished to provision dashboards"
Jan 23 09:53:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:53:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 23 09:53:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Jan 23 09:53:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:53:59 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:53:59 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:53:59 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:53:59 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:53:59 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.tytkrd on compute-0
Jan 23 09:53:59 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.tytkrd on compute-0
Jan 23 09:53:59 compute-0 sudo[97824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:00 compute-0 sudo[97824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:00 compute-0 sudo[97824]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:00.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:00 compute-0 sudo[97849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:00 compute-0 sudo[97849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=grafana-apiserver t=2026-01-23T09:54:00.09054377Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Jan 23 09:54:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=grafana-apiserver t=2026-01-23T09:54:00.091212238Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Jan 23 09:54:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:00 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 23 09:54:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 23 09:54:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 23 09:54:00 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 109 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=66/66 les/c/f=67/67/0 sis=109) [1]/[2] r=-1 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:54:00 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 109 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=66/66 les/c/f=67/67/0 sis=109) [1]/[2] r=-1 lpr=109 pi=[66,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:54:00 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 109 pg[10.12( v 61'760 (0'0,61'760] local-lis/les=108/109 n=4 ec=59/46 lis/c=67/67 les/c/f=68/68/0 sis=108) [2]/[1] async=[2] r=0 lpr=108 pi=[67,108)/1 crt=61'760 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:54:00 compute-0 podman[97916]: 2026-01-23 09:54:00.53474329 +0000 UTC m=+0.053871698 container create 95e74e288904f067541a3e3f6f7ca214d385ac7c16c5f526c3df23a0fa540826 (image=quay.io/ceph/keepalived:2.2.4, name=goofy_chaplygin, build-date=2023-02-22T09:23:20, version=2.2.4, io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-type=git, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793)
Jan 23 09:54:00 compute-0 systemd[1]: Started libpod-conmon-95e74e288904f067541a3e3f6f7ca214d385ac7c16c5f526c3df23a0fa540826.scope.
Jan 23 09:54:00 compute-0 podman[97916]: 2026-01-23 09:54:00.51284539 +0000 UTC m=+0.031973818 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 23 09:54:00 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:00 compute-0 podman[97916]: 2026-01-23 09:54:00.625808987 +0000 UTC m=+0.144937425 container init 95e74e288904f067541a3e3f6f7ca214d385ac7c16c5f526c3df23a0fa540826 (image=quay.io/ceph/keepalived:2.2.4, name=goofy_chaplygin, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vcs-type=git, release=1793, description=keepalived for Ceph, build-date=2023-02-22T09:23:20)
Jan 23 09:54:00 compute-0 podman[97916]: 2026-01-23 09:54:00.634029443 +0000 UTC m=+0.153157851 container start 95e74e288904f067541a3e3f6f7ca214d385ac7c16c5f526c3df23a0fa540826 (image=quay.io/ceph/keepalived:2.2.4, name=goofy_chaplygin, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, release=1793, io.buildah.version=1.28.2, name=keepalived, distribution-scope=public, version=2.2.4, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20)
Jan 23 09:54:00 compute-0 goofy_chaplygin[97932]: 0 0
Jan 23 09:54:00 compute-0 systemd[1]: libpod-95e74e288904f067541a3e3f6f7ca214d385ac7c16c5f526c3df23a0fa540826.scope: Deactivated successfully.
Jan 23 09:54:00 compute-0 podman[97916]: 2026-01-23 09:54:00.64086036 +0000 UTC m=+0.159988788 container attach 95e74e288904f067541a3e3f6f7ca214d385ac7c16c5f526c3df23a0fa540826 (image=quay.io/ceph/keepalived:2.2.4, name=goofy_chaplygin, description=keepalived for Ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, version=2.2.4, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-type=git, io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived)
Jan 23 09:54:00 compute-0 podman[97916]: 2026-01-23 09:54:00.641989621 +0000 UTC m=+0.161118029 container died 95e74e288904f067541a3e3f6f7ca214d385ac7c16c5f526c3df23a0fa540826 (image=quay.io/ceph/keepalived:2.2.4, name=goofy_chaplygin, vcs-type=git, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 09:54:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-deae83cfdced489d058c12ace0502b5a3eaa2cd55ca5b8e44077814a5347fdc7-merged.mount: Deactivated successfully.
Jan 23 09:54:00 compute-0 podman[97916]: 2026-01-23 09:54:00.699436086 +0000 UTC m=+0.218564494 container remove 95e74e288904f067541a3e3f6f7ca214d385ac7c16c5f526c3df23a0fa540826 (image=quay.io/ceph/keepalived:2.2.4, name=goofy_chaplygin, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, release=1793, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.28.2)
Jan 23 09:54:00 compute-0 systemd[1]: libpod-conmon-95e74e288904f067541a3e3f6f7ca214d385ac7c16c5f526c3df23a0fa540826.scope: Deactivated successfully.
Jan 23 09:54:00 compute-0 systemd[1]: Reloading.
Jan 23 09:54:00 compute-0 systemd-sysv-generator[97981]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:54:00 compute-0 systemd-rc-local-generator[97975]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:54:00 compute-0 ceph-mon[74335]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 23 09:54:00 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:00 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:00 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:00 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:00 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:54:00 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:54:00 compute-0 ceph-mon[74335]: Deploying daemon keepalived.rgw.default.compute-0.tytkrd on compute-0
Jan 23 09:54:00 compute-0 ceph-mon[74335]: osdmap e109: 3 total, 3 up, 3 in
Jan 23 09:54:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095400 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 09:54:00 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 29 completed events
Jan 23 09:54:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:54:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 1 remapped+peering, 352 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 467 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 23 09:54:01 compute-0 systemd[1]: Reloading.
Jan 23 09:54:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 23 09:54:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 23 09:54:01 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 23 09:54:01 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 110 pg[10.12( v 61'760 (0'0,61'760] local-lis/les=108/109 n=4 ec=59/46 lis/c=108/67 les/c/f=109/68/0 sis=110 pruub=15.016245842s) [2] async=[2] r=-1 lpr=110 pi=[67,110)/1 crt=61'760 mlcod 61'760 active pruub 274.296173096s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:54:01 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 110 pg[10.12( v 61'760 (0'0,61'760] local-lis/les=108/109 n=4 ec=59/46 lis/c=108/67 les/c/f=109/68/0 sis=110 pruub=15.015810013s) [2] r=-1 lpr=110 pi=[67,110)/1 crt=61'760 mlcod 0'0 unknown NOTIFY pruub 274.296173096s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 09:54:01 compute-0 systemd-sysv-generator[98023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:54:01 compute-0 systemd-rc-local-generator[98020]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:54:01 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.tytkrd for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:01 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:01 compute-0 podman[98078]: 2026-01-23 09:54:01.699019575 +0000 UTC m=+0.101754601 container create fb69e42829d45e3b674ff9bd3f3333c8c90dc07a3801eda65c7a0ef9a0f84b50 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vendor=Red Hat, Inc., release=1793, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived)
Jan 23 09:54:01 compute-0 podman[98078]: 2026-01-23 09:54:01.619374081 +0000 UTC m=+0.022109127 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 23 09:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59938237d20c0ad1457de530a92e033783b0d28b99675d438d51d81eaf3fd06c/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:01 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:01.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:01 compute-0 podman[98078]: 2026-01-23 09:54:01.954826829 +0000 UTC m=+0.357561875 container init fb69e42829d45e3b674ff9bd3f3333c8c90dc07a3801eda65c7a0ef9a0f84b50 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, version=2.2.4, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 09:54:01 compute-0 podman[98078]: 2026-01-23 09:54:01.961747669 +0000 UTC m=+0.364482725 container start fb69e42829d45e3b674ff9bd3f3333c8c90dc07a3801eda65c7a0ef9a0f84b50 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vendor=Red Hat, Inc., name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.component=keepalived-container)
Jan 23 09:54:01 compute-0 bash[98078]: fb69e42829d45e3b674ff9bd3f3333c8c90dc07a3801eda65c7a0ef9a0f84b50
Jan 23 09:54:01 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.tytkrd for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:54:01 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:01 compute-0 ceph-mon[74335]: pgmap v152: 353 pgs: 1 remapped+peering, 352 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 467 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 23 09:54:01 compute-0 ceph-mon[74335]: osdmap e110: 3 total, 3 up, 3 in
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd[98094]: Fri Jan 23 09:54:01 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd[98094]: Fri Jan 23 09:54:01 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd[98094]: Fri Jan 23 09:54:01 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd[98094]: Fri Jan 23 09:54:01 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd[98094]: Fri Jan 23 09:54:01 2026: Failed to bind to process monitoring socket - errno 98 - Address already in use
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd[98094]: Fri Jan 23 09:54:01 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd[98094]: Fri Jan 23 09:54:01 2026: Starting VRRP child process, pid=4
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd[98094]: Fri Jan 23 09:54:01 2026: Startup complete
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd[98094]: Fri Jan 23 09:54:01 2026: (VI_0) Entering BACKUP STATE (init)
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:54:01 2026: (VI_0) Entering BACKUP STATE
Jan 23 09:54:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd[98094]: Fri Jan 23 09:54:01 2026: VRRP_Script(check_backend) succeeded
Jan 23 09:54:02 compute-0 sudo[97849]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 09:54:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:02.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 09:54:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 23 09:54:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:02 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:54:02 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:54:02 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:54:02 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:54:02 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.qpmsjd on compute-2
Jan 23 09:54:02 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.qpmsjd on compute-2
Jan 23 09:54:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:02 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 23 09:54:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 23 09:54:02 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 23 09:54:02 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 111 pg[10.13( v 62'759 (0'0,62'759] local-lis/les=0/0 n=5 ec=59/46 lis/c=109/66 les/c/f=110/67/0 sis=111) [1] r=0 lpr=111 pi=[66,111)/1 luod=0'0 crt=62'759 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:54:02 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 111 pg[10.13( v 62'759 (0'0,62'759] local-lis/les=0/0 n=5 ec=59/46 lis/c=109/66 les/c/f=110/67/0 sis=111) [1] r=0 lpr=111 pi=[66,111)/1 crt=62'759 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:54:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc[96148]: Fri Jan 23 09:54:02 2026: (VI_0) Entering MASTER STATE
Jan 23 09:54:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:54:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:54:03 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:03 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:03 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:03 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 09:54:03 compute-0 ceph-mon[74335]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 09:54:03 compute-0 ceph-mon[74335]: Deploying daemon keepalived.rgw.default.compute-2.qpmsjd on compute-2
Jan 23 09:54:03 compute-0 ceph-mon[74335]: osdmap e111: 3 total, 3 up, 3 in
Jan 23 09:54:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:54:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:54:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 1 remapped+peering, 352 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 09:54:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:54:03 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:54:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 23 09:54:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 23 09:54:03 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 23 09:54:03 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 112 pg[10.13( v 62'759 (0'0,62'759] local-lis/les=111/112 n=5 ec=59/46 lis/c=109/66 les/c/f=110/67/0 sis=111) [1] r=0 lpr=111 pi=[66,111)/1 crt=62'759 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:54:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:54:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:03 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:03 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:03.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:54:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:54:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 23 09:54:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:03 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 4ad5b0fc-8efd-4184-90a7-cf60ba4b44f2 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 23 09:54:03 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 4ad5b0fc-8efd-4184-90a7-cf60ba4b44f2 (Updating ingress.rgw.default deployment (+4 -> 4)) in 8 seconds
Jan 23 09:54:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 23 09:54:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:03 compute-0 ceph-mgr[74633]: [progress INFO root] update: starting ev 09c0f19d-ee7e-4d02-a1db-80f7f56f63ef (Updating prometheus deployment (+1 -> 1))
Jan 23 09:54:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:04.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:04 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:04 compute-0 ceph-mon[74335]: pgmap v155: 353 pgs: 1 remapped+peering, 352 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 09:54:04 compute-0 ceph-mon[74335]: osdmap e112: 3 total, 3 up, 3 in
Jan 23 09:54:04 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:04 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:04 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:04 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:04 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Jan 23 09:54:04 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Jan 23 09:54:04 compute-0 sudo[98104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:04 compute-0 sudo[98104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:04 compute-0 sudo[98104]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:04 compute-0 sudo[98129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:04 compute-0 sudo[98129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Jan 23 09:54:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 23 09:54:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 23 09:54:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 23 09:54:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 23 09:54:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 23 09:54:05 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 23 09:54:05 compute-0 ceph-mon[74335]: Deploying daemon prometheus.compute-0 on compute-0
Jan 23 09:54:05 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 23 09:54:05 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 113 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=73/73 les/c/f=74/74/0 sis=113) [1] r=0 lpr=113 pi=[73,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:54:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:05 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6c98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-rgw-default-compute-0-tytkrd[98094]: Fri Jan 23 09:54:05 2026: (VI_0) Entering MASTER STATE
Jan 23 09:54:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:05 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:05.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:05 compute-0 ceph-mgr[74633]: [progress INFO root] Writing back 30 completed events
Jan 23 09:54:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 23 09:54:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:06.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:06 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:06 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 23 09:54:06 compute-0 ceph-mon[74335]: pgmap v157: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Jan 23 09:54:06 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 23 09:54:06 compute-0 ceph-mon[74335]: osdmap e113: 3 total, 3 up, 3 in
Jan 23 09:54:06 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 23 09:54:06 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 114 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=73/73 les/c/f=74/74/0 sis=114) [1]/[2] r=-1 lpr=114 pi=[73,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:54:06 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 114 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=73/73 les/c/f=74/74/0 sis=114) [1]/[2] r=-1 lpr=114 pi=[73,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:54:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 23 09:54:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 1 objects/s recovering
Jan 23 09:54:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 23 09:54:07 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 23 09:54:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 23 09:54:07 compute-0 ceph-mon[74335]: osdmap e114: 3 total, 3 up, 3 in
Jan 23 09:54:07 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 23 09:54:07 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 23 09:54:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 23 09:54:07 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 23 09:54:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:07 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca0002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:07 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:07.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:08.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:08 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 23 09:54:08 compute-0 ceph-mon[74335]: pgmap v160: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 1 objects/s recovering
Jan 23 09:54:08 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 23 09:54:08 compute-0 ceph-mon[74335]: osdmap e115: 3 total, 3 up, 3 in
Jan 23 09:54:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 23 09:54:08 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 23 09:54:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 116 pg[10.14( v 62'771 (0'0,62'771] local-lis/les=0/0 n=5 ec=59/46 lis/c=114/73 les/c/f=115/74/0 sis=116) [1] r=0 lpr=116 pi=[73,116)/1 luod=0'0 crt=62'771 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:54:08 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 116 pg[10.14( v 62'771 (0'0,62'771] local-lis/les=0/0 n=5 ec=59/46 lis/c=114/73 les/c/f=115/74/0 sis=116) [1] r=0 lpr=116 pi=[73,116)/1 crt=62'771 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:54:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 1 peering, 352 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s; 27 B/s, 2 objects/s recovering
Jan 23 09:54:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:09 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:54:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:09 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:09 compute-0 podman[98191]: 2026-01-23 09:54:09.557430247 +0000 UTC m=+4.912327946 volume create a32c712149efeec7cd1effd896a0ef37e58387f0cf3a8dd67e28a2a97dbccc52
Jan 23 09:54:09 compute-0 podman[98191]: 2026-01-23 09:54:09.567466335 +0000 UTC m=+4.922364034 container create 347892df9d50d7db0324fa3f9a5d2146e7405c4dc21c6d6448cf683fd46c8d89 (image=quay.io/prometheus/prometheus:v2.51.0, name=magical_tu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:09 compute-0 podman[98191]: 2026-01-23 09:54:09.537320122 +0000 UTC m=+4.892217841 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 23 09:54:09 compute-0 systemd[1]: Started libpod-conmon-347892df9d50d7db0324fa3f9a5d2146e7405c4dc21c6d6448cf683fd46c8d89.scope.
Jan 23 09:54:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd78ba016d81860f24f71b085009c63f2e904345d125cf05d93c2cc3d928336/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:09 compute-0 podman[98191]: 2026-01-23 09:54:09.663100557 +0000 UTC m=+5.017998276 container init 347892df9d50d7db0324fa3f9a5d2146e7405c4dc21c6d6448cf683fd46c8d89 (image=quay.io/prometheus/prometheus:v2.51.0, name=magical_tu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:09 compute-0 podman[98191]: 2026-01-23 09:54:09.674553773 +0000 UTC m=+5.029451472 container start 347892df9d50d7db0324fa3f9a5d2146e7405c4dc21c6d6448cf683fd46c8d89 (image=quay.io/prometheus/prometheus:v2.51.0, name=magical_tu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:09 compute-0 magical_tu[98449]: 65534 65534
Jan 23 09:54:09 compute-0 systemd[1]: libpod-347892df9d50d7db0324fa3f9a5d2146e7405c4dc21c6d6448cf683fd46c8d89.scope: Deactivated successfully.
Jan 23 09:54:09 compute-0 podman[98191]: 2026-01-23 09:54:09.679490809 +0000 UTC m=+5.034388558 container attach 347892df9d50d7db0324fa3f9a5d2146e7405c4dc21c6d6448cf683fd46c8d89 (image=quay.io/prometheus/prometheus:v2.51.0, name=magical_tu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:09 compute-0 podman[98191]: 2026-01-23 09:54:09.680045835 +0000 UTC m=+5.034943564 container died 347892df9d50d7db0324fa3f9a5d2146e7405c4dc21c6d6448cf683fd46c8d89 (image=quay.io/prometheus/prometheus:v2.51.0, name=magical_tu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbd78ba016d81860f24f71b085009c63f2e904345d125cf05d93c2cc3d928336-merged.mount: Deactivated successfully.
Jan 23 09:54:09 compute-0 podman[98191]: 2026-01-23 09:54:09.718060815 +0000 UTC m=+5.072958514 container remove 347892df9d50d7db0324fa3f9a5d2146e7405c4dc21c6d6448cf683fd46c8d89 (image=quay.io/prometheus/prometheus:v2.51.0, name=magical_tu, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:09 compute-0 podman[98191]: 2026-01-23 09:54:09.721546111 +0000 UTC m=+5.076443810 volume remove a32c712149efeec7cd1effd896a0ef37e58387f0cf3a8dd67e28a2a97dbccc52
Jan 23 09:54:09 compute-0 systemd[1]: libpod-conmon-347892df9d50d7db0324fa3f9a5d2146e7405c4dc21c6d6448cf683fd46c8d89.scope: Deactivated successfully.
Jan 23 09:54:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:09 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ca0002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:09 compute-0 podman[98468]: 2026-01-23 09:54:09.820179826 +0000 UTC m=+0.051978937 volume create 9693ba1c1f598dfb6f89e08bec950127c622c2329f7e2bf00d031da4f7510dd7
Jan 23 09:54:09 compute-0 podman[98468]: 2026-01-23 09:54:09.832405334 +0000 UTC m=+0.064204445 container create 13a7db2dde9cc6517f4349b6fac0c9f0cc73476a3a3e3b4127a9ff677e9d1096 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_banzai, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:09.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:09 compute-0 systemd[1]: Started libpod-conmon-13a7db2dde9cc6517f4349b6fac0c9f0cc73476a3a3e3b4127a9ff677e9d1096.scope.
Jan 23 09:54:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:09 compute-0 podman[98468]: 2026-01-23 09:54:09.799533596 +0000 UTC m=+0.031332737 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 23 09:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be688af07ecd01ee53bbd0aede5e978af657a336a147b226348556703d5b3d2a/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:10 compute-0 podman[98468]: 2026-01-23 09:54:10.010997637 +0000 UTC m=+0.242796768 container init 13a7db2dde9cc6517f4349b6fac0c9f0cc73476a3a3e3b4127a9ff677e9d1096 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_banzai, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 23 09:54:10 compute-0 podman[98468]: 2026-01-23 09:54:10.017592029 +0000 UTC m=+0.249391150 container start 13a7db2dde9cc6517f4349b6fac0c9f0cc73476a3a3e3b4127a9ff677e9d1096 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_banzai, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:10 compute-0 laughing_banzai[98485]: 65534 65534
Jan 23 09:54:10 compute-0 systemd[1]: libpod-13a7db2dde9cc6517f4349b6fac0c9f0cc73476a3a3e3b4127a9ff677e9d1096.scope: Deactivated successfully.
Jan 23 09:54:10 compute-0 podman[98468]: 2026-01-23 09:54:10.022462194 +0000 UTC m=+0.254261305 container attach 13a7db2dde9cc6517f4349b6fac0c9f0cc73476a3a3e3b4127a9ff677e9d1096 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_banzai, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:10 compute-0 podman[98468]: 2026-01-23 09:54:10.02413277 +0000 UTC m=+0.255931881 container died 13a7db2dde9cc6517f4349b6fac0c9f0cc73476a3a3e3b4127a9ff677e9d1096 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_banzai, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 23 09:54:10 compute-0 ceph-mon[74335]: osdmap e116: 3 total, 3 up, 3 in
Jan 23 09:54:10 compute-0 ceph-mon[74335]: pgmap v163: 353 pgs: 1 peering, 352 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s; 27 B/s, 2 objects/s recovering
Jan 23 09:54:10 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 23 09:54:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:10.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-be688af07ecd01ee53bbd0aede5e978af657a336a147b226348556703d5b3d2a-merged.mount: Deactivated successfully.
Jan 23 09:54:10 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 117 pg[10.14( v 62'771 (0'0,62'771] local-lis/les=116/117 n=5 ec=59/46 lis/c=114/73 les/c/f=115/74/0 sis=116) [1] r=0 lpr=116 pi=[73,116)/1 crt=62'771 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:54:10 compute-0 podman[98468]: 2026-01-23 09:54:10.06866274 +0000 UTC m=+0.300461851 container remove 13a7db2dde9cc6517f4349b6fac0c9f0cc73476a3a3e3b4127a9ff677e9d1096 (image=quay.io/prometheus/prometheus:v2.51.0, name=laughing_banzai, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:10 compute-0 podman[98468]: 2026-01-23 09:54:10.072309771 +0000 UTC m=+0.304108882 volume remove 9693ba1c1f598dfb6f89e08bec950127c622c2329f7e2bf00d031da4f7510dd7
Jan 23 09:54:10 compute-0 systemd[1]: libpod-conmon-13a7db2dde9cc6517f4349b6fac0c9f0cc73476a3a3e3b4127a9ff677e9d1096.scope: Deactivated successfully.
Jan 23 09:54:10 compute-0 systemd[1]: Reloading.
Jan 23 09:54:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:10 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cbc0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:10 compute-0 systemd-sysv-generator[98534]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:54:10 compute-0 systemd-rc-local-generator[98528]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:54:10 compute-0 systemd[1]: Reloading.
Jan 23 09:54:10 compute-0 systemd-rc-local-generator[98570]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:54:10 compute-0 systemd-sysv-generator[98574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:54:10 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:54:10 compute-0 podman[98626]: 2026-01-23 09:54:10.984476002 +0000 UTC m=+0.047190065 container create 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec44db0bbc9902d0b0dffdc31672a0a6a0ef4f1d924df37071d5099fb7667628/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec44db0bbc9902d0b0dffdc31672a0a6a0ef4f1d924df37071d5099fb7667628/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:11 compute-0 podman[98626]: 2026-01-23 09:54:11.041825486 +0000 UTC m=+0.104539579 container init 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:11 compute-0 ceph-mon[74335]: osdmap e117: 3 total, 3 up, 3 in
Jan 23 09:54:11 compute-0 podman[98626]: 2026-01-23 09:54:11.04664781 +0000 UTC m=+0.109361883 container start 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:11 compute-0 bash[98626]: 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf
Jan 23 09:54:11 compute-0 podman[98626]: 2026-01-23 09:54:10.964325385 +0000 UTC m=+0.027039488 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 23 09:54:11 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:54:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 1 peering, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s; 0 B/s, 1 objects/s recovering
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.087Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.088Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.088Z caller=main.go:623 level=info host_details="(Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 x86_64 compute-0 (none))"
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.088Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.088Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.091Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.092Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.094Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.094Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.100Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.100Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.08µs
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.100Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.101Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.101Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=32.131µs wal_replay_duration=505.014µs wbl_replay_duration=160ns total_replay_duration=561.576µs
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.102Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.102Z caller=main.go:1153 level=info msg="TSDB started"
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.102Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Jan 23 09:54:11 compute-0 ceph-mgr[74633]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 23 09:54:11 compute-0 sudo[98129]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.127Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=24.654921ms db_storage=1.29µs remote_storage=2.35µs web_handler=540ns query_engine=13.501µs scrape=3.145806ms scrape_sd=138.834µs notify=18.321µs notify_sd=11.67µs rules=20.728453ms tracing=14.71µs
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.127Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0[98641]: ts=2026-01-23T09:54:11.127Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Jan 23 09:54:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 23 09:54:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:11 compute-0 ceph-mgr[74633]: [progress INFO root] complete: finished ev 09c0f19d-ee7e-4d02-a1db-80f7f56f63ef (Updating prometheus deployment (+1 -> 1))
Jan 23 09:54:11 compute-0 ceph-mgr[74633]: [progress INFO root] Completed event 09c0f19d-ee7e-4d02-a1db-80f7f56f63ef (Updating prometheus deployment (+1 -> 1)) in 7 seconds
Jan 23 09:54:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Jan 23 09:54:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:11 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cc800a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095411 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 09:54:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[95052]: 23/01/2026 09:54:11 : epoch 697344f8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6cac004050 fd 38 proxy ignored for local
Jan 23 09:54:11 compute-0 kernel: ganesha.nfsd[95181]: segfault at 50 ip 00007f6d4bac232e sp 00007f6cb97f9210 error 4 in libntirpc.so.5.8[7f6d4baa7000+2c000] likely on CPU 5 (core 0, socket 5)
Jan 23 09:54:11 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 23 09:54:11 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Jan 23 09:54:11 compute-0 systemd[1]: Started Process Core Dump (PID 98659/UID 0).
Jan 23 09:54:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:11.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:12.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:12 compute-0 sshd-session[98661]: Accepted publickey for zuul from 192.168.122.30 port 55958 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:54:12 compute-0 systemd-logind[784]: New session 38 of user zuul.
Jan 23 09:54:12 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 23 09:54:12 compute-0 sshd-session[98661]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:54:12 compute-0 ceph-mon[74335]: pgmap v165: 353 pgs: 1 peering, 352 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s; 0 B/s, 1 objects/s recovering
Jan 23 09:54:12 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:12 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:12 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:12 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Jan 23 09:54:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  1: '-n'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  2: 'mgr.compute-0.nbdygh'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  3: '-f'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  4: '--setuser'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  5: 'ceph'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  6: '--setgroup'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  7: 'ceph'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  8: '--default-log-to-file=false'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  9: '--default-log-to-journald=true'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr respawn  exe_path /proc/self/exe
Jan 23 09:54:12 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.nbdygh(active, since 2m), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:12 compute-0 sshd-session[90040]: Connection closed by 192.168.122.100 port 57554
Jan 23 09:54:12 compute-0 sshd-session[90009]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 23 09:54:12 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 23 09:54:12 compute-0 systemd[1]: session-35.scope: Consumed 58.333s CPU time.
Jan 23 09:54:12 compute-0 systemd-logind[784]: Session 35 logged out. Waiting for processes to exit.
Jan 23 09:54:12 compute-0 systemd-logind[784]: Removed session 35.
Jan 23 09:54:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ignoring --setuser ceph since I am not root
Jan 23 09:54:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ignoring --setgroup ceph since I am not root
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: pidfile_write: ignore empty --pid-file
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'alerts'
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'balancer'
Jan 23 09:54:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:12.531+0000 7f28c9150140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:54:12 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'cephadm'
Jan 23 09:54:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:12.629+0000 7f28c9150140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 09:54:12 compute-0 python3.9[98835]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 23 09:54:13 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'crash'
Jan 23 09:54:13 compute-0 ceph-mgr[74633]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:54:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:13.622+0000 7f28c9150140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 09:54:13 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'dashboard'
Jan 23 09:54:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:13.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:14.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:14 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'devicehealth'
Jan 23 09:54:14 compute-0 ceph-mgr[74633]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:54:14 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'diskprediction_local'
Jan 23 09:54:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:14.444+0000 7f28c9150140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 09:54:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 23 09:54:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 23 09:54:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]:   from numpy import show_config as show_numpy_config
Jan 23 09:54:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:14.655+0000 7f28c9150140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:54:14 compute-0 ceph-mgr[74633]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 09:54:14 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'influx'
Jan 23 09:54:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:14.734+0000 7f28c9150140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:54:14 compute-0 ceph-mgr[74633]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 09:54:14 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'insights'
Jan 23 09:54:14 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'iostat'
Jan 23 09:54:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:14.889+0000 7f28c9150140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:54:14 compute-0 ceph-mgr[74633]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 09:54:14 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'k8sevents'
Jan 23 09:54:15 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'localpool'
Jan 23 09:54:15 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mds_autoscaler'
Jan 23 09:54:15 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'mirroring'
Jan 23 09:54:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:15.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:15 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'nfs'
Jan 23 09:54:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:54:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:16.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:16 compute-0 python3.9[99021]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:54:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:16.185+0000 7f28c9150140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'orchestrator'
Jan 23 09:54:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:16.429+0000 7f28c9150140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_perf_query'
Jan 23 09:54:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:16.527+0000 7f28c9150140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'osd_support'
Jan 23 09:54:16 compute-0 ceph-mon[74335]: from='mgr.14352 192.168.122.100:0/2738770404' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Jan 23 09:54:16 compute-0 ceph-mon[74335]: mgrmap e26: compute-0.nbdygh(active, since 2m), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:16 compute-0 systemd-coredump[98660]: Process 95056 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 51:
                                                   #0  0x00007f6d4bac232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   #1  0x0000000000000000 n/a (n/a + 0x0)
                                                   #2  0x00007f6d4bacc900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
                                                   ELF object binary architecture: AMD x86-64
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.587462) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162056587599, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 930, "num_deletes": 251, "total_data_size": 1493643, "memory_usage": 1513616, "flush_reason": "Manual Compaction"}
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162056601069, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1428848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8156, "largest_seqno": 9085, "table_properties": {"data_size": 1424071, "index_size": 2301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10217, "raw_average_key_size": 18, "raw_value_size": 1414138, "raw_average_value_size": 2566, "num_data_blocks": 102, "num_entries": 551, "num_filter_entries": 551, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162023, "oldest_key_time": 1769162023, "file_creation_time": 1769162056, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 13650 microseconds, and 6527 cpu microseconds.
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.601127) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1428848 bytes OK
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.601148) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.602975) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.602989) EVENT_LOG_v1 {"time_micros": 1769162056602985, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.603004) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1488958, prev total WAL file size 1488958, number of live WAL files 2.
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.603824) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1395KB)], [20(10MB)]
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162056603983, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 12879778, "oldest_snapshot_seqno": -1}
Jan 23 09:54:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:16.627+0000 7f28c9150140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'pg_autoscaler'
Jan 23 09:54:16 compute-0 systemd[1]: systemd-coredump@0-98659-0.service: Deactivated successfully.
Jan 23 09:54:16 compute-0 systemd[1]: systemd-coredump@0-98659-0.service: Consumed 1.424s CPU time.
Jan 23 09:54:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:16.732+0000 7f28c9150140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'progress'
Jan 23 09:54:16 compute-0 podman[99109]: 2026-01-23 09:54:16.739002655 +0000 UTC m=+0.032878799 container died bd89f1243d2eeec95b4e706e560db0d4f07fe842ddf566993f13eeee07fb7987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3709 keys, 12440705 bytes, temperature: kUnknown
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162056747151, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 12440705, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12410046, "index_size": 20309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 94913, "raw_average_key_size": 25, "raw_value_size": 12336126, "raw_average_value_size": 3325, "num_data_blocks": 879, "num_entries": 3709, "num_filter_entries": 3709, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769162056, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.747753) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12440705 bytes
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.749736) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.7 rd, 86.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.9 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(17.7) write-amplify(8.7) OK, records in: 4236, records dropped: 527 output_compression: NoCompression
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.749787) EVENT_LOG_v1 {"time_micros": 1769162056749766, "job": 6, "event": "compaction_finished", "compaction_time_micros": 143566, "compaction_time_cpu_micros": 38215, "output_level": 6, "num_output_files": 1, "total_output_size": 12440705, "num_input_records": 4236, "num_output_records": 3709, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162056750169, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162056752693, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.603652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.752737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.752742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.752744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.752747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:54:16 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:54:16.752748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:54:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-13260c323821708d5d7a8166da641e5eea1cb9d0de01fd29d8822dc04af91ce0-merged.mount: Deactivated successfully.
Jan 23 09:54:16 compute-0 podman[99109]: 2026-01-23 09:54:16.803523618 +0000 UTC m=+0.097399742 container remove bd89f1243d2eeec95b4e706e560db0d4f07fe842ddf566993f13eeee07fb7987 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:54:16 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
Jan 23 09:54:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:16.823+0000 7f28c9150140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 09:54:16 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'prometheus'
Jan 23 09:54:16 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 09:54:16 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.837s CPU time.
Jan 23 09:54:17 compute-0 sudo[99223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raurlswlseaftbgwrgeqlfaekrdvnptj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162056.6214838-88-154065941473889/AnsiballZ_command.py'
Jan 23 09:54:17 compute-0 sudo[99223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:54:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:17.236+0000 7f28c9150140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:54:17 compute-0 ceph-mgr[74633]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 09:54:17 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rbd_support'
Jan 23 09:54:17 compute-0 python3.9[99225]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:54:17 compute-0 sudo[99223]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:17.357+0000 7f28c9150140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:54:17 compute-0 ceph-mgr[74633]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 09:54:17 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'restful'
Jan 23 09:54:17 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rgw'
Jan 23 09:54:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:17.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:17.872+0000 7f28c9150140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:54:17 compute-0 ceph-mgr[74633]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 09:54:17 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'rook'
Jan 23 09:54:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:18.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:18 compute-0 sudo[99378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhqpvbgnlpsuobsnzmewkhbzepgqjalc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162057.8121676-124-96798958794736/AnsiballZ_stat.py'
Jan 23 09:54:18 compute-0 sudo[99378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:54:18 compute-0 python3.9[99380]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:54:18 compute-0 sudo[99378]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:18.557+0000 7f28c9150140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:54:18 compute-0 ceph-mgr[74633]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 09:54:18 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'selftest'
Jan 23 09:54:18 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.jmakme restarted
Jan 23 09:54:18 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.jmakme started
Jan 23 09:54:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:18.641+0000 7f28c9150140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:54:18 compute-0 ceph-mgr[74633]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 09:54:18 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'snap_schedule'
Jan 23 09:54:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:18.733+0000 7f28c9150140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:54:18 compute-0 ceph-mgr[74633]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 09:54:18 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'stats'
Jan 23 09:54:18 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'status'
Jan 23 09:54:18 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.nbdygh(active, since 2m), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:18 compute-0 ceph-mon[74335]: Standby manager daemon compute-1.jmakme restarted
Jan 23 09:54:18 compute-0 ceph-mon[74335]: Standby manager daemon compute-1.jmakme started
Jan 23 09:54:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:18.907+0000 7f28c9150140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:54:18 compute-0 ceph-mgr[74633]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 09:54:18 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telegraf'
Jan 23 09:54:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:18.987+0000 7f28c9150140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:54:18 compute-0 ceph-mgr[74633]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 09:54:18 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'telemetry'
Jan 23 09:54:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:19.161+0000 7f28c9150140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'test_orchestrator'
Jan 23 09:54:19 compute-0 sudo[99532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhooyqgihjpcwmidvunpettzpigznuns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162058.8462656-157-136877963556451/AnsiballZ_file.py'
Jan 23 09:54:19 compute-0 sudo[99532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:54:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:19.421+0000 7f28c9150140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'volumes'
Jan 23 09:54:19 compute-0 python3.9[99534]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:54:19 compute-0 sudo[99532]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:19.750+0000 7f28c9150140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr[py] Loading python module 'zabbix'
Jan 23 09:54:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:19.833+0000 7f28c9150140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Active manager daemon compute-0.nbdygh restarted
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.nbdygh
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: ms_deliver_dispatch: unhandled message 0x562aca043860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr handle_mgr_map Activating!
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.nbdygh(active, starting, since 0.0302434s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr handle_mgr_map I am now activating
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:54:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ymknms"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ymknms"}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e9 all = 0
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.prgzmm"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.prgzmm"}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e9 all = 0
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.bcvzvj"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bcvzvj"}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e9 all = 0
Jan 23 09:54:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:19.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mgrmap e27: compute-0.nbdygh(active, since 2m), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:19 compute-0 ceph-mon[74335]: Active manager daemon compute-0.nbdygh restarted
Jan 23 09:54:19 compute-0 ceph-mon[74335]: Activating manager daemon compute-0.nbdygh
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.uczrot", "id": "compute-2.uczrot"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uczrot", "id": "compute-2.uczrot"}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.jmakme", "id": "compute-1.jmakme"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-1.jmakme", "id": "compute-1.jmakme"}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).mds e9 all = 1
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Manager daemon compute-0.nbdygh is now available
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: balancer
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Starting
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:54:19
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: cephadm
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: crash
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: dashboard
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: devicehealth
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Starting
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [dashboard INFO sso] Loading SSO DB version=1
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: iostat
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: nfs
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: orchestrator
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: pg_autoscaler
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: progress
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 23 09:54:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: prometheus
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [prometheus INFO root] server_addr: :: server_port: 9283
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [prometheus INFO root] Cache enabled
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [progress INFO root] Loading...
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f2841c0db50>, <progress.module.GhostEvent object at 0x7f2841c0d310>, <progress.module.GhostEvent object at 0x7f2841c0d2e0>, <progress.module.GhostEvent object at 0x7f2841c0d2b0>, <progress.module.GhostEvent object at 0x7f2841c0d280>, <progress.module.GhostEvent object at 0x7f2841c0d250>, <progress.module.GhostEvent object at 0x7f2841c0d220>, <progress.module.GhostEvent object at 0x7f2841c0d1f0>, <progress.module.GhostEvent object at 0x7f2841c0d1c0>, <progress.module.GhostEvent object at 0x7f2841c0d190>, <progress.module.GhostEvent object at 0x7f2841c0d160>, <progress.module.GhostEvent object at 0x7f2841c0d130>, <progress.module.GhostEvent object at 0x7f2841c0d100>, <progress.module.GhostEvent object at 0x7f2841c0d0d0>, <progress.module.GhostEvent object at 0x7f2841c0d0a0>, <progress.module.GhostEvent object at 0x7f2841c0d070>, <progress.module.GhostEvent object at 0x7f2841c0d040>, <progress.module.GhostEvent object at 0x7f2841c0dbe0>, <progress.module.GhostEvent object at 0x7f2841c0dc10>, <progress.module.GhostEvent object at 0x7f2847c80fd0>, <progress.module.GhostEvent object at 0x7f2847c80fa0>, <progress.module.GhostEvent object at 0x7f2847c80f70>, <progress.module.GhostEvent object at 0x7f2847c80f40>, <progress.module.GhostEvent object at 0x7f2847c80f10>, <progress.module.GhostEvent object at 0x7f2847c80ee0>, <progress.module.GhostEvent object at 0x7f2847c80eb0>, <progress.module.GhostEvent object at 0x7f2847c80e80>, <progress.module.GhostEvent object at 0x7f2847c80e50>, <progress.module.GhostEvent object at 0x7f2847c80e20>, <progress.module.GhostEvent object at 0x7f2847c80df0>] historic events
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [progress INFO root] Loaded OSDMap, ready.
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [rbd_support INFO root] recovery thread starting
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [rbd_support INFO root] starting setup
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: rbd_support
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [prometheus INFO root] starting metric collection thread
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: restful
Jan 23 09:54:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [prometheus INFO root] Starting engine...
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.error] [23/Jan/2026:09:54:19] ENGINE Bus STARTING
Jan 23 09:54:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: [23/Jan/2026:09:54:19] ENGINE Bus STARTING
Jan 23 09:54:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: CherryPy Checker:
Jan 23 09:54:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: The Application mounted at '' has an empty config.
Jan 23 09:54:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: status
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: telemetry
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [restful INFO root] server_addr: :: server_port: 8003
Jan 23 09:54:19 compute-0 ceph-mgr[74633]: [restful WARNING root] server not running: no certificate configured
Jan 23 09:54:20 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:54:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"} v 0)
Jan 23 09:54:20 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:54:20 compute-0 sudo[99786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcjknhlsepielpbfveorkdxxkblsjjiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162059.7580776-184-181522678943179/AnsiballZ_file.py'
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:54:20 compute-0 sudo[99786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] PerfHandler: starting
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: mgr load Constructed class from module: volumes
Jan 23 09:54:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:20.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TaskHandler: starting
Jan 23 09:54:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"} v 0)
Jan 23 09:54:20 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:20.118+0000 7f2835366640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:54:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:20.120+0000 7f283099d640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:20.120+0000 7f283099d640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:20.120+0000 7f283099d640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:20.120+0000 7f283099d640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T09:54:20.120+0000 7f283099d640 -1 client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: client.0 error registering admin socket command: (17) File exists
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:54:20 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uczrot restarted
Jan 23 09:54:20 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.uczrot started
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:54:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: [23/Jan/2026:09:54:20] ENGINE Serving on http://:::9283
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.error] [23/Jan/2026:09:54:20] ENGINE Serving on http://:::9283
Jan 23 09:54:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: [23/Jan/2026:09:54:20] ENGINE Bus STARTED
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.error] [23/Jan/2026:09:54:20] ENGINE Bus STARTED
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [prometheus INFO root] Engine started.
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] setup complete
Jan 23 09:54:20 compute-0 sshd-session[99842]: Accepted publickey for ceph-admin from 192.168.122.100 port 36082 ssh2: RSA SHA256:KUDiO2K/X1wi9imZiH/VfiDaYgPU2ishZ01Sxv0ziUk
Jan 23 09:54:20 compute-0 python3.9[99800]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:54:20 compute-0 systemd-logind[784]: New session 39 of user ceph-admin.
Jan 23 09:54:20 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Jan 23 09:54:20 compute-0 sshd-session[99842]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 23 09:54:20 compute-0 sudo[99786]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 23 09:54:20 compute-0 sudo[99877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:20 compute-0 sudo[99877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:20 compute-0 sudo[99877]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:20 compute-0 sudo[99933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: [dashboard INFO dashboard.module] Engine started.
Jan 23 09:54:20 compute-0 sudo[99933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:20 compute-0 ceph-mon[74335]: osdmap e118: 3 total, 3 up, 3 in
Jan 23 09:54:20 compute-0 ceph-mon[74335]: mgrmap e28: compute-0.nbdygh(active, starting, since 0.0302434s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ymknms"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.prgzmm"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bcvzvj"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.nbdygh", "id": "compute-0.nbdygh"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uczrot", "id": "compute-2.uczrot"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr metadata", "who": "compute-1.jmakme", "id": "compute-1.jmakme"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: Manager daemon compute-0.nbdygh is now available
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/mirror_snapshot_schedule"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.nbdygh/trash_purge_schedule"}]: dispatch
Jan 23 09:54:20 compute-0 ceph-mon[74335]: Standby manager daemon compute-2.uczrot restarted
Jan 23 09:54:20 compute-0 ceph-mon[74335]: Standby manager daemon compute-2.uczrot started
Jan 23 09:54:20 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.nbdygh(active, since 1.08581s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:54:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:54:21 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:54:21] ENGINE Bus STARTING
Jan 23 09:54:21 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:54:21] ENGINE Bus STARTING
Jan 23 09:54:21 compute-0 python3.9[100101]: ansible-ansible.builtin.service_facts Invoked
Jan 23 09:54:21 compute-0 podman[100133]: 2026-01-23 09:54:21.208571667 +0000 UTC m=+0.115427130 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 09:54:21 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:54:21] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:54:21 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:54:21] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:54:21 compute-0 network[100180]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 09:54:21 compute-0 network[100181]: 'network-scripts' will be removed from distribution in near future.
Jan 23 09:54:21 compute-0 network[100182]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 09:54:21 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:54:21] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:54:21 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:54:21] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:54:21 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:54:21] ENGINE Bus STARTED
Jan 23 09:54:21 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:54:21] ENGINE Bus STARTED
Jan 23 09:54:21 compute-0 ceph-mgr[74633]: [cephadm INFO cherrypy.error] [23/Jan/2026:09:54:21] ENGINE Client ('192.168.122.100', 52034) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:54:21 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : [23/Jan/2026:09:54:21] ENGINE Client ('192.168.122.100', 52034) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:54:21 compute-0 podman[100200]: 2026-01-23 09:54:21.386699628 +0000 UTC m=+0.062947150 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 09:54:21 compute-0 podman[100133]: 2026-01-23 09:54:21.462317767 +0000 UTC m=+0.369173210 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:54:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:21.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:54:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 23 09:54:21 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 23 09:54:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 23 09:54:22 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Check health
Jan 23 09:54:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 23 09:54:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 23 09:54:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:22.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:22 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 23 09:54:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095422 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 09:54:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [NOTICE] 022/095422 (4) : haproxy version is 2.3.17-d1c9119
Jan 23 09:54:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [NOTICE] 022/095422 (4) : path to executable is /usr/local/sbin/haproxy
Jan 23 09:54:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [ALERT] 022/095422 (4) : backend 'backend' has no server available!
Jan 23 09:54:22 compute-0 ceph-mon[74335]: mgrmap e29: compute-0.nbdygh(active, since 1.08581s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:22 compute-0 ceph-mon[74335]: [23/Jan/2026:09:54:21] ENGINE Bus STARTING
Jan 23 09:54:22 compute-0 ceph-mon[74335]: [23/Jan/2026:09:54:21] ENGINE Serving on http://192.168.122.100:8765
Jan 23 09:54:22 compute-0 ceph-mon[74335]: [23/Jan/2026:09:54:21] ENGINE Serving on https://192.168.122.100:7150
Jan 23 09:54:22 compute-0 ceph-mon[74335]: [23/Jan/2026:09:54:21] ENGINE Bus STARTED
Jan 23 09:54:22 compute-0 ceph-mon[74335]: [23/Jan/2026:09:54:21] ENGINE Client ('192.168.122.100', 52034) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 09:54:22 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 23 09:54:22 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.nbdygh(active, since 2s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:22 compute-0 podman[100331]: 2026-01-23 09:54:22.412760326 +0000 UTC m=+0.052859472 container exec 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:22 compute-0 podman[100331]: 2026-01-23 09:54:22.45274803 +0000 UTC m=+0.092847156 container exec_died 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:54:23 compute-0 ceph-mon[74335]: pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:54:23 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 23 09:54:23 compute-0 ceph-mon[74335]: osdmap e119: 3 total, 3 up, 3 in
Jan 23 09:54:23 compute-0 ceph-mon[74335]: mgrmap e30: compute-0.nbdygh(active, since 2s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:23 compute-0 podman[100500]: 2026-01-23 09:54:23.631453255 +0000 UTC m=+0.628805824 container exec 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 09:54:23 compute-0 podman[100500]: 2026-01-23 09:54:23.644538027 +0000 UTC m=+0.641890596 container exec_died 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 09:54:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:54:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:54:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:54:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:23 compute-0 podman[100607]: 2026-01-23 09:54:23.871043744 +0000 UTC m=+0.055418662 container exec 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., architecture=x86_64, description=keepalived for Ceph, name=keepalived, io.openshift.expose-services=, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9)
Jan 23 09:54:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:23.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v6: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:54:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 23 09:54:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 23 09:54:23 compute-0 podman[100607]: 2026-01-23 09:54:23.887814478 +0000 UTC m=+0.072189386 container exec_died 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, architecture=x86_64, description=keepalived for Ceph, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-type=git, name=keepalived)
Jan 23 09:54:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:24.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:24 compute-0 podman[100700]: 2026-01-23 09:54:24.08876746 +0000 UTC m=+0.056312427 container exec c12cd358f71085f8f02219ac258799ba47dc04ec4aa13a22c98c7af3dc91dab0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:24 compute-0 podman[100700]: 2026-01-23 09:54:24.126995336 +0000 UTC m=+0.094540283 container exec_died c12cd358f71085f8f02219ac258799ba47dc04ec4aa13a22c98c7af3dc91dab0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:24 compute-0 podman[100828]: 2026-01-23 09:54:24.345144072 +0000 UTC m=+0.059484945 container exec a54bba5b68ea44a8d28033c77b2e521ac5290ce8b599976a9c0c4e403ef44f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:24 compute-0 podman[100828]: 2026-01-23 09:54:24.601768052 +0000 UTC m=+0.316108765 container exec_died a54bba5b68ea44a8d28033c77b2e521ac5290ce8b599976a9c0c4e403ef44f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 23 09:54:24 compute-0 python3.9[100938]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:54:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:54:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:54:24 compute-0 podman[101036]: 2026-01-23 09:54:24.970017935 +0000 UTC m=+0.054610319 container exec 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:24 compute-0 ceph-mon[74335]: pgmap v6: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:54:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 23 09:54:25 compute-0 podman[101036]: 2026-01-23 09:54:25.011746228 +0000 UTC m=+0.096338572 container exec_died 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:25 compute-0 sudo[99933]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 23 09:54:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 23 09:54:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:25 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 23 09:54:25 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.nbdygh(active, since 5s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:54:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:54:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 23 09:54:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 09:54:25 compute-0 python3.9[101204]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:54:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:25 compute-0 sudo[101209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:25 compute-0 sudo[101209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:25 compute-0 sudo[101209]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:25 compute-0 sudo[101235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 09:54:25 compute-0 sudo[101235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v8: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:54:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:25.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 23 09:54:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 23 09:54:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 09:54:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:26.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 23 09:54:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:26 compute-0 ceph-mon[74335]: osdmap e120: 3 total, 3 up, 3 in
Jan 23 09:54:26 compute-0 ceph-mon[74335]: mgrmap e31: compute-0.nbdygh(active, since 5s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:54:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 09:54:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 23 09:54:26 compute-0 sudo[101235]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:26 compute-0 sudo[101315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:26 compute-0 sudo[101315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:26 compute-0 sudo[101315]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:26 compute-0 sudo[101340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 23 09:54:26 compute-0 sudo[101340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 23 09:54:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 23 09:54:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 23 09:54:26 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 23 09:54:26 compute-0 sudo[101340]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 09:54:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 09:54:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095426 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 09:54:26 compute-0 python3.9[101507]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:54:27 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 1.
Jan 23 09:54:27 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:54:27 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.837s CPU time.
Jan 23 09:54:27 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:54:27 compute-0 ceph-mon[74335]: pgmap v8: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:54:27 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 23 09:54:27 compute-0 ceph-mon[74335]: osdmap e121: 3 total, 3 up, 3 in
Jan 23 09:54:27 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:27 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:27 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 09:54:27 compute-0 podman[101585]: 2026-01-23 09:54:27.508777186 +0000 UTC m=+0.110472344 container create 8b07b0a91308a280be87c57a40f5eda65a176fbdaf3393b6b42491735a49ec88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:54:27 compute-0 podman[101585]: 2026-01-23 09:54:27.42606731 +0000 UTC m=+0.027762458 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8207052a3a812ab6a0f8a2480b8b48dd3ce3bb97f631979d30113cde1d081d4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8207052a3a812ab6a0f8a2480b8b48dd3ce3bb97f631979d30113cde1d081d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8207052a3a812ab6a0f8a2480b8b48dd3ce3bb97f631979d30113cde1d081d4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8207052a3a812ab6a0f8a2480b8b48dd3ce3bb97f631979d30113cde1d081d4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:27 compute-0 podman[101585]: 2026-01-23 09:54:27.670829393 +0000 UTC m=+0.272524551 container init 8b07b0a91308a280be87c57a40f5eda65a176fbdaf3393b6b42491735a49ec88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 09:54:27 compute-0 podman[101585]: 2026-01-23 09:54:27.676096878 +0000 UTC m=+0.277792026 container start 8b07b0a91308a280be87c57a40f5eda65a176fbdaf3393b6b42491735a49ec88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 23 09:54:27 compute-0 bash[101585]: 8b07b0a91308a280be87c57a40f5eda65a176fbdaf3393b6b42491735a49ec88
Jan 23 09:54:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:27 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 09:54:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:27 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 09:54:27 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:54:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:54:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:27 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 09:54:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:27 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 09:54:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:27 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 09:54:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:27 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 09:54:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:54:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:27 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 09:54:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 23 09:54:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 09:54:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:27 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:54:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:54:27 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:54:27 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:54:27 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:54:27 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:54:27 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:54:27 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:54:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:27 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:54:27 compute-0 sudo[101666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 23 09:54:27 compute-0 sudo[101666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:27 compute-0 sudo[101666]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v10: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 682 B/s wr, 14 op/s
Jan 23 09:54:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 23 09:54:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 23 09:54:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:27.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:27 compute-0 sudo[101720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph
Jan 23 09:54:27 compute-0 sudo[101720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:27 compute-0 sudo[101720]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:27 compute-0 sudo[101768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:54:27 compute-0 sudo[101768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:27 compute-0 sudo[101768]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 sudo[101809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:28 compute-0 sudo[101809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[101809]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:28.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:28 compute-0 sudo[101876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xziuhnsyaynykgxdbxytatjedffhmldx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162067.7831655-328-168949129839617/AnsiballZ_setup.py'
Jan 23 09:54:28 compute-0 sudo[101876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:54:28 compute-0 sudo[101863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:54:28 compute-0 sudo[101863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[101863]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 sudo[101920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:54:28 compute-0 sudo[101920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[101920]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 sudo[101945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new
Jan 23 09:54:28 compute-0 sudo[101945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[101945]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 sudo[101970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 23 09:54:28 compute-0 sudo[101970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[101970]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:54:28 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:54:28 compute-0 python3.9[101892]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:54:28 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:54:28 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:54:28 compute-0 sudo[101995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:54:28 compute-0 sudo[101995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[101995]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:54:28 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:54:28 compute-0 sudo[102024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:54:28 compute-0 sudo[102024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[102024]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 sudo[102049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:54:28 compute-0 sudo[102049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[102049]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 sudo[102074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:28 compute-0 sudo[102074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[102074]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 sudo[101876]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 sudo[102103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:54:28 compute-0 sudo[102103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[102103]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 09:54:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:54:28 compute-0 ceph-mon[74335]: Updating compute-0:/etc/ceph/ceph.conf
Jan 23 09:54:28 compute-0 ceph-mon[74335]: Updating compute-1:/etc/ceph/ceph.conf
Jan 23 09:54:28 compute-0 ceph-mon[74335]: Updating compute-2:/etc/ceph/ceph.conf
Jan 23 09:54:28 compute-0 ceph-mon[74335]: pgmap v10: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 682 B/s wr, 14 op/s
Jan 23 09:54:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 23 09:54:28 compute-0 ceph-mon[74335]: Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:54:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 23 09:54:28 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 23 09:54:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 23 09:54:28 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 23 09:54:28 compute-0 sudo[102151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:54:28 compute-0 sudo[102151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[102151]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 sudo[102176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new
Jan 23 09:54:28 compute-0 sudo[102176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[102176]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 sudo[102201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf.new /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:54:28 compute-0 sudo[102201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:28 compute-0 sudo[102201]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:28 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:54:28 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 sudo[102249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 23 09:54:29 compute-0 sudo[102249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102249]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 sudo[102298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph
Jan 23 09:54:29 compute-0 sudo[102298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102298]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 sudo[102350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btzcyrdcvbvnmwwnarfuxzzhzwezhnpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162067.7831655-328-168949129839617/AnsiballZ_dnf.py'
Jan 23 09:54:29 compute-0 sudo[102350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:54:29 compute-0 sudo[102349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:54:29 compute-0 sudo[102349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102349]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 sudo[102377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:29 compute-0 sudo[102377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102377]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 sudo[102402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:54:29 compute-0 sudo[102402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102402]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 122 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=89/89 les/c/f=90/90/0 sis=122) [1] r=0 lpr=122 pi=[89,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:54:29 compute-0 python3.9[102364]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:54:29 compute-0 sudo[102450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:54:29 compute-0 sudo[102450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102450]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 sudo[102476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new
Jan 23 09:54:29 compute-0 sudo[102476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102476]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 sudo[102501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 sudo[102501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102501]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 sudo[102526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:54:29 compute-0 sudo[102526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102526]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 sudo[102551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config
Jan 23 09:54:29 compute-0 sudo[102551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102551]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 sudo[102579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:54:29 compute-0 sudo[102579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102579]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 sudo[102605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:29 compute-0 sudo[102605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102605]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 ceph-mon[74335]: Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:54:29 compute-0 ceph-mon[74335]: Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.conf
Jan 23 09:54:29 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 23 09:54:29 compute-0 ceph-mon[74335]: osdmap e122: 3 total, 3 up, 3 in
Jan 23 09:54:29 compute-0 ceph-mon[74335]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 ceph-mon[74335]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:54:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 23 09:54:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 23 09:54:29 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 23 09:54:29 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 123 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=89/89 les/c/f=90/90/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[89,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:54:29 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 123 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=89/89 les/c/f=90/90/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[89,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:54:29 compute-0 sudo[102633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:54:29 compute-0 sudo[102633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102633]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v13: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 880 B/s wr, 18 op/s
Jan 23 09:54:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 23 09:54:29 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 23 09:54:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:29.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:29 compute-0 sudo[102684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:54:29 compute-0 sudo[102684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:29 compute-0 sudo[102684]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:54:29] "GET /metrics HTTP/1.1" 200 46658 "" "Prometheus/2.51.0"
Jan 23 09:54:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:54:29] "GET /metrics HTTP/1.1" 200 46658 "" "Prometheus/2.51.0"
Jan 23 09:54:30 compute-0 sudo[102712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new
Jan 23 09:54:30 compute-0 sudo[102712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:30 compute-0 sudo[102712]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:30.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:30 compute-0 sudo[102738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f3005f84-239a-55b6-a948-8f1fb592b920/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring.new /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:54:30 compute-0 sudo[102738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:30 compute-0 sudo[102738]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:54:30 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v14: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 836 B/s wr, 17 op/s
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 23 09:54:30 compute-0 ceph-mon[74335]: Updating compute-0:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:54:30 compute-0 ceph-mon[74335]: Updating compute-1:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:54:30 compute-0 ceph-mon[74335]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 09:54:30 compute-0 ceph-mon[74335]: osdmap e123: 3 total, 3 up, 3 in
Jan 23 09:54:30 compute-0 ceph-mon[74335]: pgmap v13: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 880 B/s wr, 18 op/s
Jan 23 09:54:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 23 09:54:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: Updating compute-2:/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/config/ceph.client.admin.keyring
Jan 23 09:54:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 23 09:54:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:30 compute-0 sudo[102784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:30 compute-0 sudo[102784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:30 compute-0 sudo[102784]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 23 09:54:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 23 09:54:30 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 23 09:54:30 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 125 pg[10.19( v 62'771 (0'0,62'771] local-lis/les=0/0 n=7 ec=59/46 lis/c=123/89 les/c/f=124/90/0 sis=125) [1] r=0 lpr=125 pi=[89,125)/1 luod=0'0 crt=62'771 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:54:30 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 125 pg[10.19( v 62'771 (0'0,62'771] local-lis/les=0/0 n=7 ec=59/46 lis/c=123/89 les/c/f=124/90/0 sis=125) [1] r=0 lpr=125 pi=[89,125)/1 crt=62'771 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:54:30 compute-0 sudo[102812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 09:54:30 compute-0 sudo[102812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:31 compute-0 podman[102889]: 2026-01-23 09:54:31.394463135 +0000 UTC m=+0.045190619 container create 8a1d4d6acc1249236f13d2424e493d857d8bb226fb8204a9320f1a3659ca69df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:54:31 compute-0 systemd[1]: Started libpod-conmon-8a1d4d6acc1249236f13d2424e493d857d8bb226fb8204a9320f1a3659ca69df.scope.
Jan 23 09:54:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:31 compute-0 podman[102889]: 2026-01-23 09:54:31.373153827 +0000 UTC m=+0.023881341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:31 compute-0 podman[102889]: 2026-01-23 09:54:31.477426567 +0000 UTC m=+0.128154071 container init 8a1d4d6acc1249236f13d2424e493d857d8bb226fb8204a9320f1a3659ca69df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 23 09:54:31 compute-0 podman[102889]: 2026-01-23 09:54:31.48546931 +0000 UTC m=+0.136196794 container start 8a1d4d6acc1249236f13d2424e493d857d8bb226fb8204a9320f1a3659ca69df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:54:31 compute-0 frosty_brahmagupta[102906]: 167 167
Jan 23 09:54:31 compute-0 systemd[1]: libpod-8a1d4d6acc1249236f13d2424e493d857d8bb226fb8204a9320f1a3659ca69df.scope: Deactivated successfully.
Jan 23 09:54:31 compute-0 podman[102889]: 2026-01-23 09:54:31.508614249 +0000 UTC m=+0.159341743 container attach 8a1d4d6acc1249236f13d2424e493d857d8bb226fb8204a9320f1a3659ca69df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brahmagupta, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 09:54:31 compute-0 podman[102889]: 2026-01-23 09:54:31.511891169 +0000 UTC m=+0.162618663 container died 8a1d4d6acc1249236f13d2424e493d857d8bb226fb8204a9320f1a3659ca69df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brahmagupta, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:54:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb44c1e0aeec677db524ddb6ec00ed1c989f71ec3d37cbce33301be44bb72600-merged.mount: Deactivated successfully.
Jan 23 09:54:31 compute-0 podman[102889]: 2026-01-23 09:54:31.562261891 +0000 UTC m=+0.212989375 container remove 8a1d4d6acc1249236f13d2424e493d857d8bb226fb8204a9320f1a3659ca69df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 09:54:31 compute-0 systemd[1]: libpod-conmon-8a1d4d6acc1249236f13d2424e493d857d8bb226fb8204a9320f1a3659ca69df.scope: Deactivated successfully.
Jan 23 09:54:31 compute-0 podman[102940]: 2026-01-23 09:54:31.725317656 +0000 UTC m=+0.051762501 container create 85100ce58c7eca571320dc3bd00caf004fc5c9f2e736934a988a85cce1334fab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 23 09:54:31 compute-0 systemd[1]: Started libpod-conmon-85100ce58c7eca571320dc3bd00caf004fc5c9f2e736934a988a85cce1334fab.scope.
Jan 23 09:54:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bae55d59e6a5ae696ac5a35718c14f3f624bc26159fb023e75ca3980d4af17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bae55d59e6a5ae696ac5a35718c14f3f624bc26159fb023e75ca3980d4af17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bae55d59e6a5ae696ac5a35718c14f3f624bc26159fb023e75ca3980d4af17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bae55d59e6a5ae696ac5a35718c14f3f624bc26159fb023e75ca3980d4af17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:31 compute-0 podman[102940]: 2026-01-23 09:54:31.70482158 +0000 UTC m=+0.031266455 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bae55d59e6a5ae696ac5a35718c14f3f624bc26159fb023e75ca3980d4af17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:31 compute-0 podman[102940]: 2026-01-23 09:54:31.810942432 +0000 UTC m=+0.137387297 container init 85100ce58c7eca571320dc3bd00caf004fc5c9f2e736934a988a85cce1334fab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 09:54:31 compute-0 podman[102940]: 2026-01-23 09:54:31.817895154 +0000 UTC m=+0.144339999 container start 85100ce58c7eca571320dc3bd00caf004fc5c9f2e736934a988a85cce1334fab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 09:54:31 compute-0 podman[102940]: 2026-01-23 09:54:31.821328759 +0000 UTC m=+0.147773604 container attach 85100ce58c7eca571320dc3bd00caf004fc5c9f2e736934a988a85cce1334fab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 09:54:31 compute-0 ceph-mon[74335]: pgmap v14: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 836 B/s wr, 17 op/s
Jan 23 09:54:31 compute-0 ceph-mon[74335]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 23 09:54:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 23 09:54:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 23 09:54:31 compute-0 ceph-mon[74335]: osdmap e124: 3 total, 3 up, 3 in
Jan 23 09:54:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:54:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:54:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:31 compute-0 ceph-mon[74335]: osdmap e125: 3 total, 3 up, 3 in
Jan 23 09:54:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:31.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 23 09:54:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 23 09:54:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 23 09:54:31 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 126 pg[10.19( v 62'771 (0'0,62'771] local-lis/les=125/126 n=7 ec=59/46 lis/c=123/89 les/c/f=124/90/0 sis=125) [1] r=0 lpr=125 pi=[89,125)/1 crt=62'771 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:54:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:32.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:32 compute-0 ecstatic_swanson[102957]: --> passed data devices: 0 physical, 1 LVM
Jan 23 09:54:32 compute-0 ecstatic_swanson[102957]: --> All data devices are unavailable
Jan 23 09:54:32 compute-0 systemd[1]: libpod-85100ce58c7eca571320dc3bd00caf004fc5c9f2e736934a988a85cce1334fab.scope: Deactivated successfully.
Jan 23 09:54:32 compute-0 podman[102940]: 2026-01-23 09:54:32.209004349 +0000 UTC m=+0.535449194 container died 85100ce58c7eca571320dc3bd00caf004fc5c9f2e736934a988a85cce1334fab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:54:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-93bae55d59e6a5ae696ac5a35718c14f3f624bc26159fb023e75ca3980d4af17-merged.mount: Deactivated successfully.
Jan 23 09:54:32 compute-0 podman[102940]: 2026-01-23 09:54:32.2521059 +0000 UTC m=+0.578550745 container remove 85100ce58c7eca571320dc3bd00caf004fc5c9f2e736934a988a85cce1334fab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_swanson, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 09:54:32 compute-0 systemd[1]: libpod-conmon-85100ce58c7eca571320dc3bd00caf004fc5c9f2e736934a988a85cce1334fab.scope: Deactivated successfully.
Jan 23 09:54:32 compute-0 sudo[102812]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:32 compute-0 sudo[102990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:32 compute-0 sudo[102990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:32 compute-0 sudo[102990]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:32 compute-0 sudo[103015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 09:54:32 compute-0 sudo[103015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v18: 353 pgs: 1 active+recovering+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 3.0 KiB/s wr, 10 op/s; 1/227 objects misplaced (0.441%); 36 B/s, 2 objects/s recovering
Jan 23 09:54:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 23 09:54:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 23 09:54:32 compute-0 podman[103086]: 2026-01-23 09:54:32.819983999 +0000 UTC m=+0.044136121 container create 33d79ae2d18f004cd6dcd678bc748f0e195962330509a8c5210a3b64552979f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 09:54:32 compute-0 systemd[1]: Started libpod-conmon-33d79ae2d18f004cd6dcd678bc748f0e195962330509a8c5210a3b64552979f3.scope.
Jan 23 09:54:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:32 compute-0 podman[103086]: 2026-01-23 09:54:32.797157988 +0000 UTC m=+0.021310140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:32 compute-0 podman[103086]: 2026-01-23 09:54:32.909608445 +0000 UTC m=+0.133760587 container init 33d79ae2d18f004cd6dcd678bc748f0e195962330509a8c5210a3b64552979f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:54:32 compute-0 podman[103086]: 2026-01-23 09:54:32.916435344 +0000 UTC m=+0.140587466 container start 33d79ae2d18f004cd6dcd678bc748f0e195962330509a8c5210a3b64552979f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:54:32 compute-0 podman[103086]: 2026-01-23 09:54:32.920566458 +0000 UTC m=+0.144718580 container attach 33d79ae2d18f004cd6dcd678bc748f0e195962330509a8c5210a3b64552979f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 09:54:32 compute-0 serene_lalande[103102]: 167 167
Jan 23 09:54:32 compute-0 systemd[1]: libpod-33d79ae2d18f004cd6dcd678bc748f0e195962330509a8c5210a3b64552979f3.scope: Deactivated successfully.
Jan 23 09:54:32 compute-0 podman[103086]: 2026-01-23 09:54:32.923257022 +0000 UTC m=+0.147409144 container died 33d79ae2d18f004cd6dcd678bc748f0e195962330509a8c5210a3b64552979f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:54:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d852e74d120e33cc2833983d84bc6ab708d0ea4e8dbb8b3f15a1293f58ffbcdf-merged.mount: Deactivated successfully.
Jan 23 09:54:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 23 09:54:32 compute-0 podman[103086]: 2026-01-23 09:54:32.979213348 +0000 UTC m=+0.203365470 container remove 33d79ae2d18f004cd6dcd678bc748f0e195962330509a8c5210a3b64552979f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:54:32 compute-0 ceph-mon[74335]: osdmap e126: 3 total, 3 up, 3 in
Jan 23 09:54:32 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 23 09:54:32 compute-0 systemd[1]: libpod-conmon-33d79ae2d18f004cd6dcd678bc748f0e195962330509a8c5210a3b64552979f3.scope: Deactivated successfully.
Jan 23 09:54:33 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 23 09:54:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 23 09:54:33 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 23 09:54:33 compute-0 podman[103128]: 2026-01-23 09:54:33.138802947 +0000 UTC m=+0.041923309 container create 0baecf4ce9116161e265f371dccb2fcb02df5702c9b9f718831c1427d6b1b1eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_booth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 09:54:33 compute-0 systemd[1]: Started libpod-conmon-0baecf4ce9116161e265f371dccb2fcb02df5702c9b9f718831c1427d6b1b1eb.scope.
Jan 23 09:54:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa206d49f2008c2b3b4c39785b4b21fcc398213e109fb3d8e02eb0aa477ecba4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa206d49f2008c2b3b4c39785b4b21fcc398213e109fb3d8e02eb0aa477ecba4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa206d49f2008c2b3b4c39785b4b21fcc398213e109fb3d8e02eb0aa477ecba4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa206d49f2008c2b3b4c39785b4b21fcc398213e109fb3d8e02eb0aa477ecba4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:33 compute-0 podman[103128]: 2026-01-23 09:54:33.122316022 +0000 UTC m=+0.025436404 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:33 compute-0 podman[103128]: 2026-01-23 09:54:33.230520361 +0000 UTC m=+0.133640753 container init 0baecf4ce9116161e265f371dccb2fcb02df5702c9b9f718831c1427d6b1b1eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_booth, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 09:54:33 compute-0 podman[103128]: 2026-01-23 09:54:33.238139981 +0000 UTC m=+0.141260343 container start 0baecf4ce9116161e265f371dccb2fcb02df5702c9b9f718831c1427d6b1b1eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_booth, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 09:54:33 compute-0 podman[103128]: 2026-01-23 09:54:33.242960555 +0000 UTC m=+0.146080917 container attach 0baecf4ce9116161e265f371dccb2fcb02df5702c9b9f718831c1427d6b1b1eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_booth, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:54:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095433 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 09:54:33 compute-0 gallant_booth[103144]: {
Jan 23 09:54:33 compute-0 gallant_booth[103144]:     "1": [
Jan 23 09:54:33 compute-0 gallant_booth[103144]:         {
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             "devices": [
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "/dev/loop3"
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             ],
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             "lv_name": "ceph_lv0",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             "lv_size": "21470642176",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             "name": "ceph_lv0",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             "tags": {
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.cluster_name": "ceph",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.crush_device_class": "",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.encrypted": "0",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.osd_id": "1",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.type": "block",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.vdo": "0",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:                 "ceph.with_tpm": "0"
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             },
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             "type": "block",
Jan 23 09:54:33 compute-0 gallant_booth[103144]:             "vg_name": "ceph_vg0"
Jan 23 09:54:33 compute-0 gallant_booth[103144]:         }
Jan 23 09:54:33 compute-0 gallant_booth[103144]:     ]
Jan 23 09:54:33 compute-0 gallant_booth[103144]: }
Jan 23 09:54:33 compute-0 systemd[1]: libpod-0baecf4ce9116161e265f371dccb2fcb02df5702c9b9f718831c1427d6b1b1eb.scope: Deactivated successfully.
Jan 23 09:54:33 compute-0 podman[103128]: 2026-01-23 09:54:33.571915883 +0000 UTC m=+0.475036245 container died 0baecf4ce9116161e265f371dccb2fcb02df5702c9b9f718831c1427d6b1b1eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_booth, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 09:54:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa206d49f2008c2b3b4c39785b4b21fcc398213e109fb3d8e02eb0aa477ecba4-merged.mount: Deactivated successfully.
Jan 23 09:54:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:33.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:33 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 23 09:54:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:33 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 23 09:54:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:33 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 09:54:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:33 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:54:33 compute-0 podman[103128]: 2026-01-23 09:54:33.924023711 +0000 UTC m=+0.827144073 container remove 0baecf4ce9116161e265f371dccb2fcb02df5702c9b9f718831c1427d6b1b1eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_booth, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:54:33 compute-0 systemd[1]: libpod-conmon-0baecf4ce9116161e265f371dccb2fcb02df5702c9b9f718831c1427d6b1b1eb.scope: Deactivated successfully.
Jan 23 09:54:33 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 127 pg[10.1b( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=92/92 les/c/f=93/93/0 sis=127) [1] r=0 lpr=127 pi=[92,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:54:33 compute-0 sudo[103015]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:34 compute-0 sudo[103169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:34 compute-0 sudo[103169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:34 compute-0 sudo[103169]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:34.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:34 compute-0 sudo[103195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 09:54:34 compute-0 sudo[103195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 23 09:54:34 compute-0 ceph-mon[74335]: pgmap v18: 353 pgs: 1 active+recovering+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 3.0 KiB/s wr, 10 op/s; 1/227 objects misplaced (0.441%); 36 B/s, 2 objects/s recovering
Jan 23 09:54:34 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 23 09:54:34 compute-0 ceph-mon[74335]: osdmap e127: 3 total, 3 up, 3 in
Jan 23 09:54:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 23 09:54:34 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 23 09:54:34 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 128 pg[10.1b( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=92/92 les/c/f=93/93/0 sis=128) [1]/[0] r=-1 lpr=128 pi=[92,128)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:54:34 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 128 pg[10.1b( empty local-lis/les=0/0 n=0 ec=59/46 lis/c=92/92 les/c/f=93/93/0 sis=128) [1]/[0] r=-1 lpr=128 pi=[92,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 09:54:34 compute-0 podman[103263]: 2026-01-23 09:54:34.560894806 +0000 UTC m=+0.080381512 container create 9a3626583c2f934c7a570128797bfe0f2b5d0e6dd1e323ab3a947d714975c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:54:34 compute-0 podman[103263]: 2026-01-23 09:54:34.501276469 +0000 UTC m=+0.020763195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:34 compute-0 systemd[1]: Started libpod-conmon-9a3626583c2f934c7a570128797bfe0f2b5d0e6dd1e323ab3a947d714975c0fe.scope.
Jan 23 09:54:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:34 compute-0 podman[103263]: 2026-01-23 09:54:34.651448988 +0000 UTC m=+0.170935704 container init 9a3626583c2f934c7a570128797bfe0f2b5d0e6dd1e323ab3a947d714975c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:54:34 compute-0 podman[103263]: 2026-01-23 09:54:34.658834232 +0000 UTC m=+0.178320928 container start 9a3626583c2f934c7a570128797bfe0f2b5d0e6dd1e323ab3a947d714975c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 09:54:34 compute-0 podman[103263]: 2026-01-23 09:54:34.664054656 +0000 UTC m=+0.183541372 container attach 9a3626583c2f934c7a570128797bfe0f2b5d0e6dd1e323ab3a947d714975c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 09:54:34 compute-0 festive_kare[103280]: 167 167
Jan 23 09:54:34 compute-0 systemd[1]: libpod-9a3626583c2f934c7a570128797bfe0f2b5d0e6dd1e323ab3a947d714975c0fe.scope: Deactivated successfully.
Jan 23 09:54:34 compute-0 podman[103263]: 2026-01-23 09:54:34.666338909 +0000 UTC m=+0.185825615 container died 9a3626583c2f934c7a570128797bfe0f2b5d0e6dd1e323ab3a947d714975c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 09:54:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d8ce92dea21de8a9bd6e09c1f6c7ca29f3aeb76e6f4d3a28e10a2ad3ddc907a-merged.mount: Deactivated successfully.
Jan 23 09:54:34 compute-0 podman[103263]: 2026-01-23 09:54:34.715055505 +0000 UTC m=+0.234542211 container remove 9a3626583c2f934c7a570128797bfe0f2b5d0e6dd1e323ab3a947d714975c0fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:54:34 compute-0 systemd[1]: libpod-conmon-9a3626583c2f934c7a570128797bfe0f2b5d0e6dd1e323ab3a947d714975c0fe.scope: Deactivated successfully.
Jan 23 09:54:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v21: 353 pgs: 1 active+recovering+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 2.3 KiB/s wr, 7 op/s; 1/227 objects misplaced (0.441%); 27 B/s, 1 objects/s recovering
Jan 23 09:54:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 23 09:54:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 23 09:54:34 compute-0 podman[103306]: 2026-01-23 09:54:34.87630156 +0000 UTC m=+0.047420911 container create 3d579df259807320b87ef86197205099eeb731d1dc3b533f324702f72fd8a88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kilby, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:54:34 compute-0 systemd[1]: Started libpod-conmon-3d579df259807320b87ef86197205099eeb731d1dc3b533f324702f72fd8a88a.scope.
Jan 23 09:54:34 compute-0 podman[103306]: 2026-01-23 09:54:34.854254251 +0000 UTC m=+0.025373622 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5e8f23e012b1ec2160805e8176e966aabc6d6334ec4f616f9b05353b484185/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5e8f23e012b1ec2160805e8176e966aabc6d6334ec4f616f9b05353b484185/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5e8f23e012b1ec2160805e8176e966aabc6d6334ec4f616f9b05353b484185/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5e8f23e012b1ec2160805e8176e966aabc6d6334ec4f616f9b05353b484185/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 23 09:54:34 compute-0 podman[103306]: 2026-01-23 09:54:34.994537006 +0000 UTC m=+0.165656377 container init 3d579df259807320b87ef86197205099eeb731d1dc3b533f324702f72fd8a88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:54:35 compute-0 podman[103306]: 2026-01-23 09:54:35.000551442 +0000 UTC m=+0.171670813 container start 3d579df259807320b87ef86197205099eeb731d1dc3b533f324702f72fd8a88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 09:54:35 compute-0 podman[103306]: 2026-01-23 09:54:35.004492921 +0000 UTC m=+0.175612282 container attach 3d579df259807320b87ef86197205099eeb731d1dc3b533f324702f72fd8a88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:54:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:54:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:54:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 23 09:54:35 compute-0 ceph-mon[74335]: osdmap e128: 3 total, 3 up, 3 in
Jan 23 09:54:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 23 09:54:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:54:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 23 09:54:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 23 09:54:35 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 23 09:54:35 compute-0 lvm[103402]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:54:35 compute-0 lvm[103402]: VG ceph_vg0 finished
Jan 23 09:54:35 compute-0 quizzical_kilby[103323]: {}
Jan 23 09:54:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:35.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:54:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 23 09:54:35 compute-0 systemd[1]: libpod-3d579df259807320b87ef86197205099eeb731d1dc3b533f324702f72fd8a88a.scope: Deactivated successfully.
Jan 23 09:54:35 compute-0 systemd[1]: libpod-3d579df259807320b87ef86197205099eeb731d1dc3b533f324702f72fd8a88a.scope: Consumed 1.471s CPU time.
Jan 23 09:54:35 compute-0 podman[103306]: 2026-01-23 09:54:35.984444604 +0000 UTC m=+1.155563975 container died 3d579df259807320b87ef86197205099eeb731d1dc3b533f324702f72fd8a88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 09:54:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 23 09:54:36 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 23 09:54:36 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 130 pg[10.1b( v 60'756 (0'0,60'756] local-lis/les=0/0 n=2 ec=59/46 lis/c=128/92 les/c/f=129/93/0 sis=130) [1] r=0 lpr=130 pi=[92,130)/1 luod=0'0 crt=60'756 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 23 09:54:36 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 130 pg[10.1b( v 60'756 (0'0,60'756] local-lis/les=0/0 n=2 ec=59/46 lis/c=128/92 les/c/f=129/93/0 sis=130) [1] r=0 lpr=130 pi=[92,130)/1 crt=60'756 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 09:54:36 compute-0 ceph-mon[74335]: pgmap v21: 353 pgs: 1 active+recovering+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 2.3 KiB/s wr, 7 op/s; 1/227 objects misplaced (0.441%); 27 B/s, 1 objects/s recovering
Jan 23 09:54:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 23 09:54:36 compute-0 ceph-mon[74335]: osdmap e129: 3 total, 3 up, 3 in
Jan 23 09:54:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf5e8f23e012b1ec2160805e8176e966aabc6d6334ec4f616f9b05353b484185-merged.mount: Deactivated successfully.
Jan 23 09:54:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=infra.usagestats t=2026-01-23T09:54:36.516981437Z level=info msg="Usage stats are ready to report"
Jan 23 09:54:36 compute-0 podman[103306]: 2026-01-23 09:54:36.558647678 +0000 UTC m=+1.729767029 container remove 3d579df259807320b87ef86197205099eeb731d1dc3b533f324702f72fd8a88a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kilby, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 09:54:36 compute-0 systemd[1]: libpod-conmon-3d579df259807320b87ef86197205099eeb731d1dc3b533f324702f72fd8a88a.scope: Deactivated successfully.
Jan 23 09:54:36 compute-0 sudo[103195]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 23 09:54:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:36 compute-0 sudo[103421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:54:36 compute-0 sudo[103420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:54:36 compute-0 sudo[103421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:36 compute-0 sudo[103420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v24: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 1023 B/s wr, 4 op/s; 27 B/s, 0 objects/s recovering
Jan 23 09:54:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 23 09:54:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 23 09:54:36 compute-0 sudo[103421]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:36 compute-0 sudo[103420]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:36 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 23 09:54:36 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 23 09:54:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 23 09:54:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 09:54:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 23 09:54:36 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 09:54:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:36 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:36 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 09:54:36 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 09:54:36 compute-0 sudo[103470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:36 compute-0 sudo[103470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:36 compute-0 sudo[103470]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:37 compute-0 sudo[103495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:37 compute-0 sudo[103495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 23 09:54:37 compute-0 ceph-mon[74335]: osdmap e130: 3 total, 3 up, 3 in
Jan 23 09:54:37 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:37 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:37 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:37 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 23 09:54:37 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 09:54:37 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 09:54:37 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 23 09:54:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 23 09:54:37 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 23 09:54:37 compute-0 ceph-osd[82641]: osd.1 pg_epoch: 131 pg[10.1b( v 60'756 (0'0,60'756] local-lis/les=130/131 n=2 ec=59/46 lis/c=128/92 les/c/f=129/93/0 sis=130) [1] r=0 lpr=130 pi=[92,130)/1 crt=60'756 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 09:54:37 compute-0 podman[103534]: 2026-01-23 09:54:37.394650015 +0000 UTC m=+0.047014430 container create 79ac1531e33e8e00a0b5fb26c7e649485de0dd64dc99b8b87cc7a6507b2e6304 (image=quay.io/ceph/ceph:v19, name=serene_albattani, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 09:54:37 compute-0 systemd[1]: Started libpod-conmon-79ac1531e33e8e00a0b5fb26c7e649485de0dd64dc99b8b87cc7a6507b2e6304.scope.
Jan 23 09:54:37 compute-0 podman[103534]: 2026-01-23 09:54:37.374227331 +0000 UTC m=+0.026591766 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:54:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:37 compute-0 podman[103534]: 2026-01-23 09:54:37.499330277 +0000 UTC m=+0.151694722 container init 79ac1531e33e8e00a0b5fb26c7e649485de0dd64dc99b8b87cc7a6507b2e6304 (image=quay.io/ceph/ceph:v19, name=serene_albattani, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 23 09:54:37 compute-0 podman[103534]: 2026-01-23 09:54:37.508799189 +0000 UTC m=+0.161163614 container start 79ac1531e33e8e00a0b5fb26c7e649485de0dd64dc99b8b87cc7a6507b2e6304 (image=quay.io/ceph/ceph:v19, name=serene_albattani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:54:37 compute-0 podman[103534]: 2026-01-23 09:54:37.512935413 +0000 UTC m=+0.165299848 container attach 79ac1531e33e8e00a0b5fb26c7e649485de0dd64dc99b8b87cc7a6507b2e6304 (image=quay.io/ceph/ceph:v19, name=serene_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 09:54:37 compute-0 serene_albattani[103550]: 167 167
Jan 23 09:54:37 compute-0 systemd[1]: libpod-79ac1531e33e8e00a0b5fb26c7e649485de0dd64dc99b8b87cc7a6507b2e6304.scope: Deactivated successfully.
Jan 23 09:54:37 compute-0 podman[103534]: 2026-01-23 09:54:37.516621615 +0000 UTC m=+0.168986030 container died 79ac1531e33e8e00a0b5fb26c7e649485de0dd64dc99b8b87cc7a6507b2e6304 (image=quay.io/ceph/ceph:v19, name=serene_albattani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:54:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b07e56c6d4a5562559512471ce54abd21ab8403a444d66536be8a39f4225699-merged.mount: Deactivated successfully.
Jan 23 09:54:37 compute-0 podman[103534]: 2026-01-23 09:54:37.563392657 +0000 UTC m=+0.215757072 container remove 79ac1531e33e8e00a0b5fb26c7e649485de0dd64dc99b8b87cc7a6507b2e6304 (image=quay.io/ceph/ceph:v19, name=serene_albattani, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 23 09:54:37 compute-0 systemd[1]: libpod-conmon-79ac1531e33e8e00a0b5fb26c7e649485de0dd64dc99b8b87cc7a6507b2e6304.scope: Deactivated successfully.
Jan 23 09:54:37 compute-0 sudo[103495]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:37 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.nbdygh (monmap changed)...
Jan 23 09:54:37 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.nbdygh (monmap changed)...
Jan 23 09:54:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.nbdygh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 23 09:54:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nbdygh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 09:54:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 09:54:37 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 09:54:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:37 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:37 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.nbdygh on compute-0
Jan 23 09:54:37 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.nbdygh on compute-0
Jan 23 09:54:37 compute-0 sudo[103567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:37 compute-0 sudo[103567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:37 compute-0 sudo[103567]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:37 compute-0 sudo[103592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:37 compute-0 sudo[103592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:37.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:38.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:38 compute-0 podman[103633]: 2026-01-23 09:54:38.321316107 +0000 UTC m=+0.066875859 container create c37fee6b29c4b4d0fd4525440288ec9c2f8b6e31bc191405f99d363d79003b4f (image=quay.io/ceph/ceph:v19, name=charming_mirzakhani, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:54:38 compute-0 ceph-mon[74335]: pgmap v24: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 1023 B/s wr, 4 op/s; 27 B/s, 0 objects/s recovering
Jan 23 09:54:38 compute-0 ceph-mon[74335]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 23 09:54:38 compute-0 ceph-mon[74335]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 09:54:38 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 23 09:54:38 compute-0 ceph-mon[74335]: osdmap e131: 3 total, 3 up, 3 in
Jan 23 09:54:38 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:38 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:38 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nbdygh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 09:54:38 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 09:54:38 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:38 compute-0 systemd[1]: Started libpod-conmon-c37fee6b29c4b4d0fd4525440288ec9c2f8b6e31bc191405f99d363d79003b4f.scope.
Jan 23 09:54:38 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:38 compute-0 podman[103633]: 2026-01-23 09:54:38.296015268 +0000 UTC m=+0.041575050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 23 09:54:38 compute-0 podman[103633]: 2026-01-23 09:54:38.400100833 +0000 UTC m=+0.145660585 container init c37fee6b29c4b4d0fd4525440288ec9c2f8b6e31bc191405f99d363d79003b4f (image=quay.io/ceph/ceph:v19, name=charming_mirzakhani, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:54:38 compute-0 podman[103633]: 2026-01-23 09:54:38.407107717 +0000 UTC m=+0.152667469 container start c37fee6b29c4b4d0fd4525440288ec9c2f8b6e31bc191405f99d363d79003b4f (image=quay.io/ceph/ceph:v19, name=charming_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 23 09:54:38 compute-0 charming_mirzakhani[103650]: 167 167
Jan 23 09:54:38 compute-0 podman[103633]: 2026-01-23 09:54:38.409957456 +0000 UTC m=+0.155517228 container attach c37fee6b29c4b4d0fd4525440288ec9c2f8b6e31bc191405f99d363d79003b4f (image=quay.io/ceph/ceph:v19, name=charming_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:54:38 compute-0 systemd[1]: libpod-c37fee6b29c4b4d0fd4525440288ec9c2f8b6e31bc191405f99d363d79003b4f.scope: Deactivated successfully.
Jan 23 09:54:38 compute-0 podman[103633]: 2026-01-23 09:54:38.413668798 +0000 UTC m=+0.159228570 container died c37fee6b29c4b4d0fd4525440288ec9c2f8b6e31bc191405f99d363d79003b4f (image=quay.io/ceph/ceph:v19, name=charming_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 09:54:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f2f16369c88f5dc5426659fee1f2adc15651b9b8e3171c780ec16a87c85ad38-merged.mount: Deactivated successfully.
Jan 23 09:54:38 compute-0 podman[103633]: 2026-01-23 09:54:38.4586324 +0000 UTC m=+0.204192152 container remove c37fee6b29c4b4d0fd4525440288ec9c2f8b6e31bc191405f99d363d79003b4f (image=quay.io/ceph/ceph:v19, name=charming_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 23 09:54:38 compute-0 systemd[1]: libpod-conmon-c37fee6b29c4b4d0fd4525440288ec9c2f8b6e31bc191405f99d363d79003b4f.scope: Deactivated successfully.
Jan 23 09:54:38 compute-0 sudo[103592]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 23 09:54:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 23 09:54:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 23 09:54:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:54:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:38 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 23 09:54:38 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 23 09:54:38 compute-0 sudo[103667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:38 compute-0 sudo[103667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:38 compute-0 sudo[103667]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:38 compute-0 sudo[103692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:38 compute-0 sudo[103692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v26: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 883 B/s wr, 4 op/s; 23 B/s, 0 objects/s recovering
Jan 23 09:54:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 23 09:54:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 23 09:54:39 compute-0 podman[103733]: 2026-01-23 09:54:39.079989746 +0000 UTC m=+0.081165143 container create 4aa3d56fbc2368b013adc4aaec76d9f68e9a498b5cb648839f93d0b182a1d37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:54:39 compute-0 systemd[1]: Started libpod-conmon-4aa3d56fbc2368b013adc4aaec76d9f68e9a498b5cb648839f93d0b182a1d37c.scope.
Jan 23 09:54:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:39 compute-0 podman[103733]: 2026-01-23 09:54:39.059302594 +0000 UTC m=+0.060478011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:39 compute-0 podman[103733]: 2026-01-23 09:54:39.164638455 +0000 UTC m=+0.165813882 container init 4aa3d56fbc2368b013adc4aaec76d9f68e9a498b5cb648839f93d0b182a1d37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_gates, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:54:39 compute-0 podman[103733]: 2026-01-23 09:54:39.170964239 +0000 UTC m=+0.172139646 container start 4aa3d56fbc2368b013adc4aaec76d9f68e9a498b5cb648839f93d0b182a1d37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:54:39 compute-0 podman[103733]: 2026-01-23 09:54:39.176033289 +0000 UTC m=+0.177208706 container attach 4aa3d56fbc2368b013adc4aaec76d9f68e9a498b5cb648839f93d0b182a1d37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_gates, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:54:39 compute-0 mystifying_gates[103750]: 167 167
Jan 23 09:54:39 compute-0 podman[103733]: 2026-01-23 09:54:39.177606903 +0000 UTC m=+0.178782310 container died 4aa3d56fbc2368b013adc4aaec76d9f68e9a498b5cb648839f93d0b182a1d37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_gates, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:54:39 compute-0 systemd[1]: libpod-4aa3d56fbc2368b013adc4aaec76d9f68e9a498b5cb648839f93d0b182a1d37c.scope: Deactivated successfully.
Jan 23 09:54:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d05d32ba43c7cbcdfcafd61ad46973eb6530c4487e09d30b69fbe52121ba1082-merged.mount: Deactivated successfully.
Jan 23 09:54:39 compute-0 podman[103733]: 2026-01-23 09:54:39.222321108 +0000 UTC m=+0.223496505 container remove 4aa3d56fbc2368b013adc4aaec76d9f68e9a498b5cb648839f93d0b182a1d37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_gates, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:54:39 compute-0 systemd[1]: libpod-conmon-4aa3d56fbc2368b013adc4aaec76d9f68e9a498b5cb648839f93d0b182a1d37c.scope: Deactivated successfully.
Jan 23 09:54:39 compute-0 sudo[103692]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:39 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 23 09:54:39 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 23 09:54:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 23 09:54:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 23 09:54:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:39 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:39 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Jan 23 09:54:39 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Jan 23 09:54:39 compute-0 ceph-mon[74335]: Reconfiguring mgr.compute-0.nbdygh (monmap changed)...
Jan 23 09:54:39 compute-0 ceph-mon[74335]: Reconfiguring daemon mgr.compute-0.nbdygh on compute-0
Jan 23 09:54:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:54:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 23 09:54:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:39 compute-0 sudo[103769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:39 compute-0 sudo[103769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:39 compute-0 sudo[103769]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:39 compute-0 sudo[103794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:39 compute-0 sudo[103794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 23 09:54:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 23 09:54:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 23 09:54:39 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 23 09:54:39 compute-0 podman[103836]: 2026-01-23 09:54:39.799660489 +0000 UTC m=+0.048491501 container create efb2bc74ac4adc6e519e7bef9698d30aa6bc2788887281c1e521ccb0c9263dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:54:39 compute-0 systemd[1]: Started libpod-conmon-efb2bc74ac4adc6e519e7bef9698d30aa6bc2788887281c1e521ccb0c9263dc9.scope.
Jan 23 09:54:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:39 compute-0 podman[103836]: 2026-01-23 09:54:39.874427574 +0000 UTC m=+0.123258616 container init efb2bc74ac4adc6e519e7bef9698d30aa6bc2788887281c1e521ccb0c9263dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:54:39 compute-0 podman[103836]: 2026-01-23 09:54:39.781988191 +0000 UTC m=+0.030819223 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:39 compute-0 podman[103836]: 2026-01-23 09:54:39.881692245 +0000 UTC m=+0.130523257 container start efb2bc74ac4adc6e519e7bef9698d30aa6bc2788887281c1e521ccb0c9263dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 09:54:39 compute-0 reverent_dirac[103852]: 167 167
Jan 23 09:54:39 compute-0 podman[103836]: 2026-01-23 09:54:39.886695353 +0000 UTC m=+0.135526365 container attach efb2bc74ac4adc6e519e7bef9698d30aa6bc2788887281c1e521ccb0c9263dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:54:39 compute-0 systemd[1]: libpod-efb2bc74ac4adc6e519e7bef9698d30aa6bc2788887281c1e521ccb0c9263dc9.scope: Deactivated successfully.
Jan 23 09:54:39 compute-0 podman[103836]: 2026-01-23 09:54:39.887897147 +0000 UTC m=+0.136728179 container died efb2bc74ac4adc6e519e7bef9698d30aa6bc2788887281c1e521ccb0c9263dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:54:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:39.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4c37f910de7d50ccc1e911f064e00e7148b0d45667ae766dd7f175b8cb76d81-merged.mount: Deactivated successfully.
Jan 23 09:54:39 compute-0 podman[103836]: 2026-01-23 09:54:39.930871604 +0000 UTC m=+0.179702616 container remove efb2bc74ac4adc6e519e7bef9698d30aa6bc2788887281c1e521ccb0c9263dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 09:54:39 compute-0 systemd[1]: libpod-conmon-efb2bc74ac4adc6e519e7bef9698d30aa6bc2788887281c1e521ccb0c9263dc9.scope: Deactivated successfully.
Jan 23 09:54:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:54:39] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000006:nfs.cephfs.2: -2
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:54:39] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 09:54:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:39 : epoch 69734553 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 09:54:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:40.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:40 compute-0 sudo[103794]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:40 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c50000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:40 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 23 09:54:40 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 23 09:54:40 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 23 09:54:40 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 23 09:54:40 compute-0 ceph-mon[74335]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 23 09:54:40 compute-0 ceph-mon[74335]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 23 09:54:40 compute-0 ceph-mon[74335]: pgmap v26: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 883 B/s wr, 4 op/s; 23 B/s, 0 objects/s recovering
Jan 23 09:54:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:40 compute-0 ceph-mon[74335]: Reconfiguring osd.1 (monmap changed)...
Jan 23 09:54:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 23 09:54:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:40 compute-0 ceph-mon[74335]: Reconfiguring daemon osd.1 on compute-0
Jan 23 09:54:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 23 09:54:40 compute-0 ceph-mon[74335]: osdmap e132: 3 total, 3 up, 3 in
Jan 23 09:54:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:40 compute-0 sudo[103893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:40 compute-0 sudo[103893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:40 compute-0 sudo[103893]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 23 09:54:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 23 09:54:40 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 23 09:54:40 compute-0 sudo[103918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:40 compute-0 sudo[103918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v29: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:54:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 23 09:54:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:54:41 compute-0 podman[103959]: 2026-01-23 09:54:41.00031954 +0000 UTC m=+0.066237511 volume create ed622318d48297e53ef5d6c4c8eb487baa5b149b07c750d204aba145dd2fff75
Jan 23 09:54:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:54:41 compute-0 podman[103959]: 2026-01-23 09:54:41.012766474 +0000 UTC m=+0.078684425 container create 1cdc8eb70abe530321f9e7cf0796e3b5a14a7aa51ae478599328bb9e1b269d2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_jang, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:41 compute-0 systemd[1]: Started libpod-conmon-1cdc8eb70abe530321f9e7cf0796e3b5a14a7aa51ae478599328bb9e1b269d2f.scope.
Jan 23 09:54:41 compute-0 podman[103959]: 2026-01-23 09:54:40.981755197 +0000 UTC m=+0.047673148 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 23 09:54:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95c05d51575eb15dedf7a16cfac3d43993d35b3a6a5f03c4bb2ad12ab246285/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:41 compute-0 podman[103959]: 2026-01-23 09:54:41.106983807 +0000 UTC m=+0.172901778 container init 1cdc8eb70abe530321f9e7cf0796e3b5a14a7aa51ae478599328bb9e1b269d2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_jang, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:41 compute-0 podman[103959]: 2026-01-23 09:54:41.114215117 +0000 UTC m=+0.180133068 container start 1cdc8eb70abe530321f9e7cf0796e3b5a14a7aa51ae478599328bb9e1b269d2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_jang, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:41 compute-0 nostalgic_jang[103976]: 65534 65534
Jan 23 09:54:41 compute-0 systemd[1]: libpod-1cdc8eb70abe530321f9e7cf0796e3b5a14a7aa51ae478599328bb9e1b269d2f.scope: Deactivated successfully.
Jan 23 09:54:41 compute-0 podman[103959]: 2026-01-23 09:54:41.119666507 +0000 UTC m=+0.185584458 container attach 1cdc8eb70abe530321f9e7cf0796e3b5a14a7aa51ae478599328bb9e1b269d2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_jang, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:41 compute-0 podman[103959]: 2026-01-23 09:54:41.120068718 +0000 UTC m=+0.185986669 container died 1cdc8eb70abe530321f9e7cf0796e3b5a14a7aa51ae478599328bb9e1b269d2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_jang, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a95c05d51575eb15dedf7a16cfac3d43993d35b3a6a5f03c4bb2ad12ab246285-merged.mount: Deactivated successfully.
Jan 23 09:54:41 compute-0 podman[103959]: 2026-01-23 09:54:41.170464751 +0000 UTC m=+0.236382702 container remove 1cdc8eb70abe530321f9e7cf0796e3b5a14a7aa51ae478599328bb9e1b269d2f (image=quay.io/prometheus/alertmanager:v0.25.0, name=nostalgic_jang, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:41 compute-0 podman[103959]: 2026-01-23 09:54:41.173789503 +0000 UTC m=+0.239707464 volume remove ed622318d48297e53ef5d6c4c8eb487baa5b149b07c750d204aba145dd2fff75
Jan 23 09:54:41 compute-0 systemd[1]: libpod-conmon-1cdc8eb70abe530321f9e7cf0796e3b5a14a7aa51ae478599328bb9e1b269d2f.scope: Deactivated successfully.
Jan 23 09:54:41 compute-0 podman[103993]: 2026-01-23 09:54:41.280883461 +0000 UTC m=+0.087261171 volume create 8225787027b50a6dc3ead1109cb6cbf8853a1de635f9132edf229724ca405a54
Jan 23 09:54:41 compute-0 podman[103993]: 2026-01-23 09:54:41.288458481 +0000 UTC m=+0.094836191 container create 60b165de23e0fd889907b5a7c2d621ade4595ad18d7c965824e7ebe3c6009f13 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:41 compute-0 podman[103993]: 2026-01-23 09:54:41.266756891 +0000 UTC m=+0.073134601 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 23 09:54:41 compute-0 systemd[1]: Started libpod-conmon-60b165de23e0fd889907b5a7c2d621ade4595ad18d7c965824e7ebe3c6009f13.scope.
Jan 23 09:54:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c218cd2da5c2c507fccc517b7e3a295be97611b162ca6cb05bd0fc2e720f5a4b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:41 compute-0 podman[103993]: 2026-01-23 09:54:41.486887663 +0000 UTC m=+0.293265393 container init 60b165de23e0fd889907b5a7c2d621ade4595ad18d7c965824e7ebe3c6009f13 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:41 compute-0 podman[103993]: 2026-01-23 09:54:41.493411763 +0000 UTC m=+0.299789473 container start 60b165de23e0fd889907b5a7c2d621ade4595ad18d7c965824e7ebe3c6009f13 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:41 compute-0 intelligent_elion[104009]: 65534 65534
Jan 23 09:54:41 compute-0 systemd[1]: libpod-60b165de23e0fd889907b5a7c2d621ade4595ad18d7c965824e7ebe3c6009f13.scope: Deactivated successfully.
Jan 23 09:54:41 compute-0 conmon[104009]: conmon 60b165de23e0fd889907 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60b165de23e0fd889907b5a7c2d621ade4595ad18d7c965824e7ebe3c6009f13.scope/container/memory.events
Jan 23 09:54:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:41 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 23 09:54:41 compute-0 podman[103993]: 2026-01-23 09:54:41.628784163 +0000 UTC m=+0.435161903 container attach 60b165de23e0fd889907b5a7c2d621ade4595ad18d7c965824e7ebe3c6009f13 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:41 compute-0 podman[103993]: 2026-01-23 09:54:41.629627866 +0000 UTC m=+0.436005576 container died 60b165de23e0fd889907b5a7c2d621ade4595ad18d7c965824e7ebe3c6009f13 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:41 compute-0 ceph-mon[74335]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 23 09:54:41 compute-0 ceph-mon[74335]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 23 09:54:41 compute-0 ceph-mon[74335]: osdmap e133: 3 total, 3 up, 3 in
Jan 23 09:54:41 compute-0 ceph-mon[74335]: pgmap v29: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Jan 23 09:54:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 09:54:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:41 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 09:54:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:41.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 09:54:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:54:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 23 09:54:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 23 09:54:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 09:54:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:42.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 09:54:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095442 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 09:54:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:42 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c30001240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c218cd2da5c2c507fccc517b7e3a295be97611b162ca6cb05bd0fc2e720f5a4b-merged.mount: Deactivated successfully.
Jan 23 09:54:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v31: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 564 B/s rd, 188 B/s wr, 1 op/s
Jan 23 09:54:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 23 09:54:43 compute-0 podman[103993]: 2026-01-23 09:54:43.011914625 +0000 UTC m=+1.818292335 container remove 60b165de23e0fd889907b5a7c2d621ade4595ad18d7c965824e7ebe3c6009f13 (image=quay.io/prometheus/alertmanager:v0.25.0, name=intelligent_elion, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 23 09:54:43 compute-0 podman[103993]: 2026-01-23 09:54:43.242482886 +0000 UTC m=+2.048860596 volume remove 8225787027b50a6dc3ead1109cb6cbf8853a1de635f9132edf229724ca405a54
Jan 23 09:54:43 compute-0 systemd[1]: libpod-conmon-60b165de23e0fd889907b5a7c2d621ade4595ad18d7c965824e7ebe3c6009f13.scope: Deactivated successfully.
Jan 23 09:54:43 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 23 09:54:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:43 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:43 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:54:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:43 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:43.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:43 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 09:54:43 compute-0 ceph-mon[74335]: osdmap e134: 3 total, 3 up, 3 in
Jan 23 09:54:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:44.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:44 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c280016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[96734]: ts=2026-01-23T09:54:44.537Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Jan 23 09:54:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 23 09:54:44 compute-0 podman[104067]: 2026-01-23 09:54:44.568325265 +0000 UTC m=+0.837223651 container died c12cd358f71085f8f02219ac258799ba47dc04ec4aa13a22c98c7af3dc91dab0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v33: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 588 B/s rd, 196 B/s wr, 1 op/s
Jan 23 09:54:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 23 09:54:45 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 23 09:54:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:45 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c30001d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:45 compute-0 ceph-mon[74335]: pgmap v31: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 564 B/s rd, 188 B/s wr, 1 op/s
Jan 23 09:54:45 compute-0 ceph-mon[74335]: osdmap e135: 3 total, 3 up, 3 in
Jan 23 09:54:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-26d29d3a0b823fda93b77374ab3157f110bd53122e2065aced00e2ab5bc7e469-merged.mount: Deactivated successfully.
Jan 23 09:54:45 compute-0 podman[104067]: 2026-01-23 09:54:45.730690548 +0000 UTC m=+1.999588924 container remove c12cd358f71085f8f02219ac258799ba47dc04ec4aa13a22c98c7af3dc91dab0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:45 compute-0 podman[104067]: 2026-01-23 09:54:45.735785469 +0000 UTC m=+2.004683855 volume remove b5efb888005db14847d71c66c95453de688fc023ad156b0e822ac1aa28a81a46
Jan 23 09:54:45 compute-0 bash[104067]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0
Jan 23 09:54:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:45 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:45 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@alertmanager.compute-0.service: Deactivated successfully.
Jan 23 09:54:45 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:54:45 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@alertmanager.compute-0.service: Consumed 1.111s CPU time.
Jan 23 09:54:45 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:54:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:45.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:54:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 23 09:54:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 23 09:54:46 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 23 09:54:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 09:54:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:46.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 09:54:46 compute-0 podman[104170]: 2026-01-23 09:54:46.097763989 +0000 UTC m=+0.045784186 volume create 33a912ab47a2ad38542c0ddb2b756fec576f650f88ec098a2d56d3d0ccc267f2
Jan 23 09:54:46 compute-0 podman[104170]: 2026-01-23 09:54:46.107675123 +0000 UTC m=+0.055695320 container create a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:46 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6c22edc10962e45899f80643a75fd409e4432e6192b478a2ee6ad5d9f42119/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6c22edc10962e45899f80643a75fd409e4432e6192b478a2ee6ad5d9f42119/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:46 compute-0 podman[104170]: 2026-01-23 09:54:46.166119348 +0000 UTC m=+0.114139565 container init a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:46 compute-0 podman[104170]: 2026-01-23 09:54:46.171296261 +0000 UTC m=+0.119316458 container start a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:46 compute-0 bash[104170]: a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982
Jan 23 09:54:46 compute-0 podman[104170]: 2026-01-23 09:54:46.081931822 +0000 UTC m=+0.029952039 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 23 09:54:46 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:54:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:54:46.209Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Jan 23 09:54:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:54:46.209Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Jan 23 09:54:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:54:46.226Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Jan 23 09:54:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:54:46.228Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Jan 23 09:54:46 compute-0 sudo[103918]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:46 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:46 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:54:46.275Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 23 09:54:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:54:46.276Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 23 09:54:46 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 23 09:54:46 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 23 09:54:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:54:46.282Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Jan 23 09:54:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:54:46.282Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Jan 23 09:54:46 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Jan 23 09:54:46 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Jan 23 09:54:46 compute-0 sudo[104206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:46 compute-0 sudo[104206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:46 compute-0 sudo[104206]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:46 compute-0 sudo[104231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 09:54:46 compute-0 sudo[104231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:46 compute-0 ceph-mon[74335]: pgmap v33: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 588 B/s rd, 196 B/s wr, 1 op/s
Jan 23 09:54:46 compute-0 ceph-mon[74335]: osdmap e136: 3 total, 3 up, 3 in
Jan 23 09:54:46 compute-0 ceph-mon[74335]: osdmap e137: 3 total, 3 up, 3 in
Jan 23 09:54:46 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:46 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:46 compute-0 ceph-mon[74335]: Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 23 09:54:46 compute-0 ceph-mon[74335]: Reconfiguring daemon grafana.compute-0 on compute-0
Jan 23 09:54:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v36: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 2 objects/s recovering
Jan 23 09:54:46 compute-0 podman[104277]: 2026-01-23 09:54:46.998771572 +0000 UTC m=+0.055806723 container create c5bbf7331cf728fc5b012698526cb27134b5669cf9b859d2174539be0313c817 (image=quay.io/ceph/grafana:10.4.0, name=suspicious_blackwell, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 23 09:54:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 23 09:54:47 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 23 09:54:47 compute-0 systemd[1]: Started libpod-conmon-c5bbf7331cf728fc5b012698526cb27134b5669cf9b859d2174539be0313c817.scope.
Jan 23 09:54:47 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:47 compute-0 podman[104277]: 2026-01-23 09:54:46.977545165 +0000 UTC m=+0.034580346 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 23 09:54:47 compute-0 podman[104277]: 2026-01-23 09:54:47.077893258 +0000 UTC m=+0.134928429 container init c5bbf7331cf728fc5b012698526cb27134b5669cf9b859d2174539be0313c817 (image=quay.io/ceph/grafana:10.4.0, name=suspicious_blackwell, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 podman[104277]: 2026-01-23 09:54:47.085959671 +0000 UTC m=+0.142994822 container start c5bbf7331cf728fc5b012698526cb27134b5669cf9b859d2174539be0313c817 (image=quay.io/ceph/grafana:10.4.0, name=suspicious_blackwell, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 podman[104277]: 2026-01-23 09:54:47.089607241 +0000 UTC m=+0.146642422 container attach c5bbf7331cf728fc5b012698526cb27134b5669cf9b859d2174539be0313c817 (image=quay.io/ceph/grafana:10.4.0, name=suspicious_blackwell, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 systemd[1]: libpod-c5bbf7331cf728fc5b012698526cb27134b5669cf9b859d2174539be0313c817.scope: Deactivated successfully.
Jan 23 09:54:47 compute-0 suspicious_blackwell[104293]: 472 0
Jan 23 09:54:47 compute-0 conmon[104293]: conmon c5bbf7331cf728fc5b01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5bbf7331cf728fc5b012698526cb27134b5669cf9b859d2174539be0313c817.scope/container/memory.events
Jan 23 09:54:47 compute-0 podman[104277]: 2026-01-23 09:54:47.092882722 +0000 UTC m=+0.149917893 container died c5bbf7331cf728fc5b012698526cb27134b5669cf9b859d2174539be0313c817 (image=quay.io/ceph/grafana:10.4.0, name=suspicious_blackwell, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-011a3723e126c3ec5366083e9a91e326a2b79e8290474098076a88c8c9f7e234-merged.mount: Deactivated successfully.
Jan 23 09:54:47 compute-0 podman[104277]: 2026-01-23 09:54:47.13732989 +0000 UTC m=+0.194365041 container remove c5bbf7331cf728fc5b012698526cb27134b5669cf9b859d2174539be0313c817 (image=quay.io/ceph/grafana:10.4.0, name=suspicious_blackwell, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 systemd[1]: libpod-conmon-c5bbf7331cf728fc5b012698526cb27134b5669cf9b859d2174539be0313c817.scope: Deactivated successfully.
Jan 23 09:54:47 compute-0 podman[104309]: 2026-01-23 09:54:47.214675067 +0000 UTC m=+0.049408986 container create 70c4a89470884216771b86d30215a9e8cf0c38e43fa087f5ccfcc0c82a51d9c4 (image=quay.io/ceph/grafana:10.4.0, name=heuristic_kilby, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 systemd[1]: Started libpod-conmon-70c4a89470884216771b86d30215a9e8cf0c38e43fa087f5ccfcc0c82a51d9c4.scope.
Jan 23 09:54:47 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:47 compute-0 podman[104309]: 2026-01-23 09:54:47.192051262 +0000 UTC m=+0.026785211 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 23 09:54:47 compute-0 podman[104309]: 2026-01-23 09:54:47.286755718 +0000 UTC m=+0.121489657 container init 70c4a89470884216771b86d30215a9e8cf0c38e43fa087f5ccfcc0c82a51d9c4 (image=quay.io/ceph/grafana:10.4.0, name=heuristic_kilby, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 podman[104309]: 2026-01-23 09:54:47.292412624 +0000 UTC m=+0.127146543 container start 70c4a89470884216771b86d30215a9e8cf0c38e43fa087f5ccfcc0c82a51d9c4 (image=quay.io/ceph/grafana:10.4.0, name=heuristic_kilby, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 heuristic_kilby[104325]: 472 0
Jan 23 09:54:47 compute-0 systemd[1]: libpod-70c4a89470884216771b86d30215a9e8cf0c38e43fa087f5ccfcc0c82a51d9c4.scope: Deactivated successfully.
Jan 23 09:54:47 compute-0 podman[104309]: 2026-01-23 09:54:47.297287419 +0000 UTC m=+0.132021368 container attach 70c4a89470884216771b86d30215a9e8cf0c38e43fa087f5ccfcc0c82a51d9c4 (image=quay.io/ceph/grafana:10.4.0, name=heuristic_kilby, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 podman[104309]: 2026-01-23 09:54:47.297780733 +0000 UTC m=+0.132514672 container died 70c4a89470884216771b86d30215a9e8cf0c38e43fa087f5ccfcc0c82a51d9c4 (image=quay.io/ceph/grafana:10.4.0, name=heuristic_kilby, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e96197d8ef67417d64602778e8068716359353acb2c035c3285fd9ce93aab034-merged.mount: Deactivated successfully.
Jan 23 09:54:47 compute-0 podman[104309]: 2026-01-23 09:54:47.348028801 +0000 UTC m=+0.182762720 container remove 70c4a89470884216771b86d30215a9e8cf0c38e43fa087f5ccfcc0c82a51d9c4 (image=quay.io/ceph/grafana:10.4.0, name=heuristic_kilby, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 systemd[1]: libpod-conmon-70c4a89470884216771b86d30215a9e8cf0c38e43fa087f5ccfcc0c82a51d9c4.scope: Deactivated successfully.
Jan 23 09:54:47 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:54:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:47 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c280016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=server t=2026-01-23T09:54:47.624343425Z level=info msg="Shutdown started" reason="System signal: terminated"
Jan 23 09:54:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=tracing t=2026-01-23T09:54:47.625418615Z level=info msg="Closing tracing"
Jan 23 09:54:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=grafana-apiserver t=2026-01-23T09:54:47.625816236Z level=info msg="StorageObjectCountTracker pruner is exiting"
Jan 23 09:54:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=ticker t=2026-01-23T09:54:47.625823656Z level=info msg=stopped last_tick=2026-01-23T09:54:40Z
Jan 23 09:54:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[97516]: logger=sqlstore.transactions t=2026-01-23T09:54:47.638043233Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 23 09:54:47 compute-0 podman[104375]: 2026-01-23 09:54:47.658010405 +0000 UTC m=+0.074064477 container died a54bba5b68ea44a8d28033c77b2e521ac5290ce8b599976a9c0c4e403ef44f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-adafe46092c5a544096e22a442a1d44aeb796056587fe58494295a7d0ee6deb4-merged.mount: Deactivated successfully.
Jan 23 09:54:47 compute-0 podman[104375]: 2026-01-23 09:54:47.705382464 +0000 UTC m=+0.121436526 container remove a54bba5b68ea44a8d28033c77b2e521ac5290ce8b599976a9c0c4e403ef44f21 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:47 compute-0 bash[104375]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0
Jan 23 09:54:47 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@grafana.compute-0.service: Deactivated successfully.
Jan 23 09:54:47 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:54:47 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@grafana.compute-0.service: Consumed 4.824s CPU time.
Jan 23 09:54:47 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:54:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:47 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c30001d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:47.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:48 compute-0 ceph-mon[74335]: pgmap v36: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 2 objects/s recovering
Jan 23 09:54:48 compute-0 ceph-mon[74335]: osdmap e138: 3 total, 3 up, 3 in
Jan 23 09:54:48 compute-0 podman[104482]: 2026-01-23 09:54:48.074476321 +0000 UTC m=+0.053626663 container create 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:48.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:48 compute-0 podman[104482]: 2026-01-23 09:54:48.051207618 +0000 UTC m=+0.030357980 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:48 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c34001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd7542535c4ceaeda21044a222e74565124785c29e9096a48b5ed65c4a43f2dc/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd7542535c4ceaeda21044a222e74565124785c29e9096a48b5ed65c4a43f2dc/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd7542535c4ceaeda21044a222e74565124785c29e9096a48b5ed65c4a43f2dc/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd7542535c4ceaeda21044a222e74565124785c29e9096a48b5ed65c4a43f2dc/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd7542535c4ceaeda21044a222e74565124785c29e9096a48b5ed65c4a43f2dc/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:48 compute-0 podman[104482]: 2026-01-23 09:54:48.193687414 +0000 UTC m=+0.172837776 container init 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:48 compute-0 podman[104482]: 2026-01-23 09:54:48.198898968 +0000 UTC m=+0.178049310 container start 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:48 compute-0 bash[104482]: 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28
Jan 23 09:54:48 compute-0 systemd[1]: Started Ceph grafana.compute-0 for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:54:48.229Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.00089139s
Jan 23 09:54:48 compute-0 sudo[104231]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:48 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:48 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:48 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 23 09:54:48 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 23 09:54:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 23 09:54:48 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:54:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:48 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 23 09:54:48 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.421057476Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-01-23T09:54:48Z
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.421491628Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.421505759Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.421510469Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.421517689Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.421521409Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.421525089Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.421528709Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.421532569Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.421536359Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.42154012Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.42154632Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.42155011Z level=info msg=Target target=[all]
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.42155814Z level=info msg="Path Home" path=/usr/share/grafana
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.42156211Z level=info msg="Path Data" path=/var/lib/grafana
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.4215665Z level=info msg="Path Logs" path=/var/log/grafana
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.42157096Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.42157501Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=settings t=2026-01-23T09:54:48.421581691Z level=info msg="App mode production"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=sqlstore t=2026-01-23T09:54:48.422900437Z level=info msg="Connecting to DB" dbtype=sqlite3
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=sqlstore t=2026-01-23T09:54:48.423481163Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=migrator t=2026-01-23T09:54:48.428254145Z level=info msg="Starting DB migrations"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=migrator t=2026-01-23T09:54:48.450771787Z level=info msg="migrations completed" performed=0 skipped=547 duration=696.419µs
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=sqlstore t=2026-01-23T09:54:48.452212547Z level=info msg="Created default organization"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=secrets t=2026-01-23T09:54:48.452885456Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=plugin.store t=2026-01-23T09:54:48.473285679Z level=info msg="Loading plugins..."
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=local.finder t=2026-01-23T09:54:48.561887827Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=plugin.store t=2026-01-23T09:54:48.561932578Z level=info msg="Plugins loaded" count=55 duration=88.650669ms
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=query_data t=2026-01-23T09:54:48.566472004Z level=info msg="Query Service initialization"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=live.push_http t=2026-01-23T09:54:48.576809899Z level=info msg="Live Push Gateway initialization"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=ngalert.migration t=2026-01-23T09:54:48.581705424Z level=info msg=Starting
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=ngalert.state.manager t=2026-01-23T09:54:48.594172299Z level=info msg="Running in alternative execution of Error/NoData mode"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=infra.usagestats.collector t=2026-01-23T09:54:48.597586033Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=provisioning.datasources t=2026-01-23T09:54:48.600541785Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=provisioning.alerting t=2026-01-23T09:54:48.623333385Z level=info msg="starting to provision alerting"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=provisioning.alerting t=2026-01-23T09:54:48.623385486Z level=info msg="finished to provision alerting"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=grafanaStorageLogger t=2026-01-23T09:54:48.623622683Z level=info msg="Storage starting"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=ngalert.state.manager t=2026-01-23T09:54:48.624023574Z level=info msg="Warming state cache for startup"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=ngalert.multiorg.alertmanager t=2026-01-23T09:54:48.625390461Z level=info msg="Starting MultiOrg Alertmanager"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=http.server t=2026-01-23T09:54:48.626887253Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=http.server t=2026-01-23T09:54:48.627310314Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=ngalert.state.manager t=2026-01-23T09:54:48.662299681Z level=info msg="State cache has been initialized" states=0 duration=38.269127ms
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=ngalert.scheduler t=2026-01-23T09:54:48.662393394Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=ticker t=2026-01-23T09:54:48.662838456Z level=info msg=starting first_tick=2026-01-23T09:54:50Z
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=provisioning.dashboard t=2026-01-23T09:54:48.688830254Z level=info msg="starting to provision dashboards"
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=plugins.update.checker t=2026-01-23T09:54:48.693530214Z level=info msg="Update check succeeded" duration=69.86069ms
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=grafana.update.checker t=2026-01-23T09:54:48.696920828Z level=info msg="Update check succeeded" duration=73.262165ms
Jan 23 09:54:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=provisioning.dashboard t=2026-01-23T09:54:48.705905956Z level=info msg="finished to provision dashboards"
Jan 23 09:54:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v38: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 1 objects/s recovering
Jan 23 09:54:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:49 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 23 09:54:49 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 23 09:54:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 23 09:54:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:49 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Jan 23 09:54:49 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Jan 23 09:54:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=grafana-apiserver t=2026-01-23T09:54:49.268333033Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Jan 23 09:54:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=grafana-apiserver t=2026-01-23T09:54:49.268866688Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Jan 23 09:54:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:49 compute-0 ceph-mon[74335]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 23 09:54:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:54:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:49 compute-0 ceph-mon[74335]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 23 09:54:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 23 09:54:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:49 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:49 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 23 09:54:49 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 23 09:54:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 09:54:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 09:54:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:49 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:49 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 23 09:54:49 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 23 09:54:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:49.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:54:49] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:54:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:54:49] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:54:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:54:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:54:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:54:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:54:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:54:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:54:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:54:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:54:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:50.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:50 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:50 compute-0 ceph-mon[74335]: pgmap v38: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 1 objects/s recovering
Jan 23 09:54:50 compute-0 ceph-mon[74335]: Reconfiguring osd.0 (monmap changed)...
Jan 23 09:54:50 compute-0 ceph-mon[74335]: Reconfiguring daemon osd.0 on compute-1
Jan 23 09:54:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 09:54:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 09:54:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:54:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:54:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:54:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:50 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-1 (unknown last config time)...
Jan 23 09:54:50 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-1 (unknown last config time)...
Jan 23 09:54:50 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-1 on compute-1
Jan 23 09:54:50 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-1 on compute-1
Jan 23 09:54:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v39: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Jan 23 09:54:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:54:51 compute-0 ceph-mon[74335]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 23 09:54:51 compute-0 ceph-mon[74335]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 23 09:54:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:51 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:54:51 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:54:51 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:51 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 23 09:54:51 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 23 09:54:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 23 09:54:51 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 09:54:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 23 09:54:51 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 09:54:51 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 23 09:54:51 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 23 09:54:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:51 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:51 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:51.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 09:54:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:52.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 09:54:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:52 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c30002e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:52 compute-0 ceph-mon[74335]: Reconfiguring node-exporter.compute-1 (unknown last config time)...
Jan 23 09:54:52 compute-0 ceph-mon[74335]: Reconfiguring daemon node-exporter.compute-1 on compute-1
Jan 23 09:54:52 compute-0 ceph-mon[74335]: pgmap v39: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Jan 23 09:54:52 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:52 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:52 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 09:54:52 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 09:54:52 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:54:52 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:54:52 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:52 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-2 (unknown last config time)...
Jan 23 09:54:52 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-2 (unknown last config time)...
Jan 23 09:54:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 23 09:54:52 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:54:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:52 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:52 compute-0 ceph-mgr[74633]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-2 on compute-2
Jan 23 09:54:52 compute-0 ceph-mgr[74633]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-2 on compute-2
Jan 23 09:54:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v40: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 419 B/s rd, 0 op/s; 15 B/s, 1 objects/s recovering
Jan 23 09:54:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 09:54:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 09:54:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Jan 23 09:54:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 23 09:54:53 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 23 09:54:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Jan 23 09:54:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 23 09:54:53 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 23 09:54:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Jan 23 09:54:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 23 09:54:53 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 23 09:54:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Jan 23 09:54:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:53 compute-0 ceph-mgr[74633]: [prometheus INFO root] Restarting engine...
Jan 23 09:54:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: [23/Jan/2026:09:54:53] ENGINE Bus STOPPING
Jan 23 09:54:53 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.error] [23/Jan/2026:09:54:53] ENGINE Bus STOPPING
Jan 23 09:54:53 compute-0 ceph-mon[74335]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 23 09:54:53 compute-0 ceph-mon[74335]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 23 09:54:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 09:54:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 23 09:54:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 23 09:54:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 23 09:54:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:53 compute-0 sudo[104557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: [23/Jan/2026:09:54:53] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Jan 23 09:54:53 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.error] [23/Jan/2026:09:54:53] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Jan 23 09:54:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: [23/Jan/2026:09:54:53] ENGINE Bus STOPPED
Jan 23 09:54:53 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.error] [23/Jan/2026:09:54:53] ENGINE Bus STOPPED
Jan 23 09:54:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: [23/Jan/2026:09:54:53] ENGINE Bus STARTING
Jan 23 09:54:53 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.error] [23/Jan/2026:09:54:53] ENGINE Bus STARTING
Jan 23 09:54:53 compute-0 sudo[104557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:53 compute-0 sudo[104557]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:53 compute-0 sudo[104593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 09:54:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:53 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c280016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:53 compute-0 sudo[104593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: [23/Jan/2026:09:54:53] ENGINE Serving on http://:::9283
Jan 23 09:54:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: [23/Jan/2026:09:54:53] ENGINE Bus STARTED
Jan 23 09:54:53 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.error] [23/Jan/2026:09:54:53] ENGINE Serving on http://:::9283
Jan 23 09:54:53 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.error] [23/Jan/2026:09:54:53] ENGINE Bus STARTED
Jan 23 09:54:53 compute-0 ceph-mgr[74633]: [prometheus INFO root] Engine started.
Jan 23 09:54:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:53 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20000ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:53.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 09:54:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:54.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 09:54:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:54 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:54 compute-0 ceph-mon[74335]: Reconfiguring crash.compute-2 (unknown last config time)...
Jan 23 09:54:54 compute-0 ceph-mon[74335]: Reconfiguring daemon crash.compute-2 on compute-2
Jan 23 09:54:54 compute-0 ceph-mon[74335]: pgmap v40: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 419 B/s rd, 0 op/s; 15 B/s, 1 objects/s recovering
Jan 23 09:54:54 compute-0 ceph-mon[74335]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 23 09:54:54 compute-0 ceph-mon[74335]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 23 09:54:54 compute-0 ceph-mon[74335]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 23 09:54:54 compute-0 podman[104692]: 2026-01-23 09:54:54.523421808 +0000 UTC m=+0.071302471 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 09:54:54 compute-0 podman[104692]: 2026-01-23 09:54:54.635520795 +0000 UTC m=+0.183401458 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 09:54:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v41: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 350 B/s rd, 0 op/s
Jan 23 09:54:55 compute-0 podman[104803]: 2026-01-23 09:54:55.14571306 +0000 UTC m=+0.062663782 container exec 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:55 compute-0 podman[104803]: 2026-01-23 09:54:55.155389188 +0000 UTC m=+0.072339910 container exec_died 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:55 compute-0 ceph-mon[74335]: pgmap v41: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 350 B/s rd, 0 op/s
Jan 23 09:54:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:55 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c30002e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:54:55 compute-0 podman[104897]: 2026-01-23 09:54:55.856404235 +0000 UTC m=+0.303160986 container exec 8b07b0a91308a280be87c57a40f5eda65a176fbdaf3393b6b42491735a49ec88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:54:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:55 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:54:55 compute-0 podman[104897]: 2026-01-23 09:54:55.874855805 +0000 UTC m=+0.321612546 container exec_died 8b07b0a91308a280be87c57a40f5eda65a176fbdaf3393b6b42491735a49ec88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 09:54:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:55.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:54:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:56.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:56 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:56 compute-0 podman[104963]: 2026-01-23 09:54:56.167174741 +0000 UTC m=+0.069066569 container exec 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 09:54:56 compute-0 podman[104963]: 2026-01-23 09:54:56.184980253 +0000 UTC m=+0.086872061 container exec_died 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 09:54:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:54:56.230Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002194425s
Jan 23 09:54:56 compute-0 podman[105030]: 2026-01-23 09:54:56.487617163 +0000 UTC m=+0.056715068 container exec 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, vcs-type=git)
Jan 23 09:54:56 compute-0 podman[105030]: 2026-01-23 09:54:56.500798327 +0000 UTC m=+0.069896202 container exec_died 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, architecture=x86_64, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.component=keepalived-container, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 23 09:54:56 compute-0 podman[105093]: 2026-01-23 09:54:56.717245647 +0000 UTC m=+0.054985400 container exec a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:56 compute-0 podman[105093]: 2026-01-23 09:54:56.748034628 +0000 UTC m=+0.085774351 container exec_died a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v42: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Jan 23 09:54:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:56 compute-0 sudo[105137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:54:56 compute-0 sudo[105137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:56 compute-0 sudo[105137]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:56 compute-0 podman[105186]: 2026-01-23 09:54:56.995233467 +0000 UTC m=+0.053408326 container exec 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:57 compute-0 podman[105186]: 2026-01-23 09:54:57.202646568 +0000 UTC m=+0.260821427 container exec_died 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:54:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:57 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:57 compute-0 podman[105299]: 2026-01-23 09:54:57.598012341 +0000 UTC m=+0.055377421 container exec 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:57 compute-0 podman[105299]: 2026-01-23 09:54:57.648706931 +0000 UTC m=+0.106071991 container exec_died 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:54:57 compute-0 sudo[104593]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:54:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:54:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:54:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:54:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v43: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 286 B/s rd, 0 op/s
Jan 23 09:54:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:54:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:54:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 09:54:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:54:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 09:54:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:54:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:54:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:57 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:57 compute-0 ceph-mon[74335]: pgmap v42: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Jan 23 09:54:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:54:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:54:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:54:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:54:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:54:57 compute-0 sudo[105349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:57 compute-0 sudo[105349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:57 compute-0 sudo[105349]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:54:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:57.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:54:57 compute-0 sudo[105374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 09:54:57 compute-0 sudo[105374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 09:54:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:58.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 09:54:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:58 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:58 compute-0 podman[105439]: 2026-01-23 09:54:58.387917454 +0000 UTC m=+0.047711659 container create dc6d563871ae1a20bdf168695220f57a7b8a3c3eeaa1299045eca6ce7007f9ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 09:54:58 compute-0 systemd[1]: Started libpod-conmon-dc6d563871ae1a20bdf168695220f57a7b8a3c3eeaa1299045eca6ce7007f9ce.scope.
Jan 23 09:54:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:58 compute-0 podman[105439]: 2026-01-23 09:54:58.367771927 +0000 UTC m=+0.027566152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:58 compute-0 podman[105439]: 2026-01-23 09:54:58.477978582 +0000 UTC m=+0.137772807 container init dc6d563871ae1a20bdf168695220f57a7b8a3c3eeaa1299045eca6ce7007f9ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_haslett, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 09:54:58 compute-0 podman[105439]: 2026-01-23 09:54:58.485954262 +0000 UTC m=+0.145748477 container start dc6d563871ae1a20bdf168695220f57a7b8a3c3eeaa1299045eca6ce7007f9ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_haslett, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:54:58 compute-0 podman[105439]: 2026-01-23 09:54:58.489663055 +0000 UTC m=+0.149457290 container attach dc6d563871ae1a20bdf168695220f57a7b8a3c3eeaa1299045eca6ce7007f9ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_haslett, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 09:54:58 compute-0 xenodochial_haslett[105455]: 167 167
Jan 23 09:54:58 compute-0 podman[105439]: 2026-01-23 09:54:58.494859538 +0000 UTC m=+0.154653743 container died dc6d563871ae1a20bdf168695220f57a7b8a3c3eeaa1299045eca6ce7007f9ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_haslett, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:54:58 compute-0 systemd[1]: libpod-dc6d563871ae1a20bdf168695220f57a7b8a3c3eeaa1299045eca6ce7007f9ce.scope: Deactivated successfully.
Jan 23 09:54:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b3e1483919f8b88e416bcb45fb8d472e255c512979ec6840c6edaab737cbcde-merged.mount: Deactivated successfully.
Jan 23 09:54:58 compute-0 podman[105439]: 2026-01-23 09:54:58.535990795 +0000 UTC m=+0.195785000 container remove dc6d563871ae1a20bdf168695220f57a7b8a3c3eeaa1299045eca6ce7007f9ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:54:58 compute-0 systemd[1]: libpod-conmon-dc6d563871ae1a20bdf168695220f57a7b8a3c3eeaa1299045eca6ce7007f9ce.scope: Deactivated successfully.
Jan 23 09:54:58 compute-0 podman[105479]: 2026-01-23 09:54:58.697587399 +0000 UTC m=+0.051517974 container create fe8d6c4579831a636d3e77f858e391e199bfebad757246659e4095af852960c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 23 09:54:58 compute-0 systemd[1]: Started libpod-conmon-fe8d6c4579831a636d3e77f858e391e199bfebad757246659e4095af852960c8.scope.
Jan 23 09:54:58 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Jan 23 09:54:58 compute-0 podman[105479]: 2026-01-23 09:54:58.673450532 +0000 UTC m=+0.027381137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b6b6e590110ed3e2474e9d2fdda2a2a742936c4f67edc1dbcaf27bb3f29714/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b6b6e590110ed3e2474e9d2fdda2a2a742936c4f67edc1dbcaf27bb3f29714/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b6b6e590110ed3e2474e9d2fdda2a2a742936c4f67edc1dbcaf27bb3f29714/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b6b6e590110ed3e2474e9d2fdda2a2a742936c4f67edc1dbcaf27bb3f29714/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b6b6e590110ed3e2474e9d2fdda2a2a742936c4f67edc1dbcaf27bb3f29714/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:54:58 compute-0 podman[105479]: 2026-01-23 09:54:58.794702132 +0000 UTC m=+0.148632727 container init fe8d6c4579831a636d3e77f858e391e199bfebad757246659e4095af852960c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:54:58 compute-0 podman[105479]: 2026-01-23 09:54:58.803536867 +0000 UTC m=+0.157467442 container start fe8d6c4579831a636d3e77f858e391e199bfebad757246659e4095af852960c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:54:58 compute-0 podman[105479]: 2026-01-23 09:54:58.807019773 +0000 UTC m=+0.160950378 container attach fe8d6c4579831a636d3e77f858e391e199bfebad757246659e4095af852960c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 09:54:58 compute-0 ceph-mon[74335]: pgmap v43: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 286 B/s rd, 0 op/s
Jan 23 09:54:58 compute-0 ceph-mon[74335]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Jan 23 09:54:59 compute-0 condescending_rhodes[105497]: --> passed data devices: 0 physical, 1 LVM
Jan 23 09:54:59 compute-0 condescending_rhodes[105497]: --> All data devices are unavailable
Jan 23 09:54:59 compute-0 systemd[1]: libpod-fe8d6c4579831a636d3e77f858e391e199bfebad757246659e4095af852960c8.scope: Deactivated successfully.
Jan 23 09:54:59 compute-0 podman[105479]: 2026-01-23 09:54:59.204490204 +0000 UTC m=+0.558420809 container died fe8d6c4579831a636d3e77f858e391e199bfebad757246659e4095af852960c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-00b6b6e590110ed3e2474e9d2fdda2a2a742936c4f67edc1dbcaf27bb3f29714-merged.mount: Deactivated successfully.
Jan 23 09:54:59 compute-0 podman[105479]: 2026-01-23 09:54:59.252625514 +0000 UTC m=+0.606556089 container remove fe8d6c4579831a636d3e77f858e391e199bfebad757246659e4095af852960c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_rhodes, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:54:59 compute-0 systemd[1]: libpod-conmon-fe8d6c4579831a636d3e77f858e391e199bfebad757246659e4095af852960c8.scope: Deactivated successfully.
Jan 23 09:54:59 compute-0 sudo[105374]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:59 compute-0 sudo[105522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:54:59 compute-0 sudo[105522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:59 compute-0 sudo[105522]: pam_unix(sudo:session): session closed for user root
Jan 23 09:54:59 compute-0 sudo[105547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 09:54:59 compute-0 sudo[105547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:54:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:59 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c30002e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 0 op/s
Jan 23 09:54:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:54:59 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:54:59 compute-0 podman[105619]: 2026-01-23 09:54:59.89820683 +0000 UTC m=+0.051168575 container create ab47a17d26374ab5378e7ac8a9b7b52ce38074b5a10dfb7787ab374f23f71673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cohen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:54:59 compute-0 systemd[1]: Started libpod-conmon-ab47a17d26374ab5378e7ac8a9b7b52ce38074b5a10dfb7787ab374f23f71673.scope.
Jan 23 09:54:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:54:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:54:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:59.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:54:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:54:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:54:59] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:54:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:54:59] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:54:59 compute-0 podman[105619]: 2026-01-23 09:54:59.875468451 +0000 UTC m=+0.028430226 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:54:59 compute-0 podman[105619]: 2026-01-23 09:54:59.99381071 +0000 UTC m=+0.146772475 container init ab47a17d26374ab5378e7ac8a9b7b52ce38074b5a10dfb7787ab374f23f71673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 09:55:00 compute-0 podman[105619]: 2026-01-23 09:55:00.003365484 +0000 UTC m=+0.156327229 container start ab47a17d26374ab5378e7ac8a9b7b52ce38074b5a10dfb7787ab374f23f71673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cohen, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:55:00 compute-0 heuristic_cohen[105635]: 167 167
Jan 23 09:55:00 compute-0 systemd[1]: libpod-ab47a17d26374ab5378e7ac8a9b7b52ce38074b5a10dfb7787ab374f23f71673.scope: Deactivated successfully.
Jan 23 09:55:00 compute-0 podman[105619]: 2026-01-23 09:55:00.020312612 +0000 UTC m=+0.173274387 container attach ab47a17d26374ab5378e7ac8a9b7b52ce38074b5a10dfb7787ab374f23f71673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 09:55:00 compute-0 podman[105619]: 2026-01-23 09:55:00.020796835 +0000 UTC m=+0.173758600 container died ab47a17d26374ab5378e7ac8a9b7b52ce38074b5a10dfb7787ab374f23f71673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cohen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e6f345201d2794f3ee9a589f07895455978a917805a727fcfd06a25cd4a7bc8-merged.mount: Deactivated successfully.
Jan 23 09:55:00 compute-0 podman[105619]: 2026-01-23 09:55:00.072608917 +0000 UTC m=+0.225570662 container remove ab47a17d26374ab5378e7ac8a9b7b52ce38074b5a10dfb7787ab374f23f71673 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_cohen, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:55:00 compute-0 systemd[1]: libpod-conmon-ab47a17d26374ab5378e7ac8a9b7b52ce38074b5a10dfb7787ab374f23f71673.scope: Deactivated successfully.
Jan 23 09:55:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:00.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:00 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:00 compute-0 podman[105670]: 2026-01-23 09:55:00.247773526 +0000 UTC m=+0.051834323 container create cfab4d59dc5e64c83dd59d02c2c93246dccbad399a7efea5dc2d71c262059fd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 09:55:00 compute-0 systemd[1]: Started libpod-conmon-cfab4d59dc5e64c83dd59d02c2c93246dccbad399a7efea5dc2d71c262059fd3.scope.
Jan 23 09:55:00 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:55:00 compute-0 podman[105670]: 2026-01-23 09:55:00.224709289 +0000 UTC m=+0.028770116 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0b8ed2749a41ccffc6ea3ce3b9e166558192372c02c97534ea352707ea050c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0b8ed2749a41ccffc6ea3ce3b9e166558192372c02c97534ea352707ea050c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0b8ed2749a41ccffc6ea3ce3b9e166558192372c02c97534ea352707ea050c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0b8ed2749a41ccffc6ea3ce3b9e166558192372c02c97534ea352707ea050c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:00 compute-0 podman[105670]: 2026-01-23 09:55:00.376988076 +0000 UTC m=+0.181048903 container init cfab4d59dc5e64c83dd59d02c2c93246dccbad399a7efea5dc2d71c262059fd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 23 09:55:00 compute-0 podman[105670]: 2026-01-23 09:55:00.384526914 +0000 UTC m=+0.188587711 container start cfab4d59dc5e64c83dd59d02c2c93246dccbad399a7efea5dc2d71c262059fd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:55:00 compute-0 podman[105670]: 2026-01-23 09:55:00.388421562 +0000 UTC m=+0.192482359 container attach cfab4d59dc5e64c83dd59d02c2c93246dccbad399a7efea5dc2d71c262059fd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:55:00 compute-0 great_perlman[105687]: {
Jan 23 09:55:00 compute-0 great_perlman[105687]:     "1": [
Jan 23 09:55:00 compute-0 great_perlman[105687]:         {
Jan 23 09:55:00 compute-0 great_perlman[105687]:             "devices": [
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "/dev/loop3"
Jan 23 09:55:00 compute-0 great_perlman[105687]:             ],
Jan 23 09:55:00 compute-0 great_perlman[105687]:             "lv_name": "ceph_lv0",
Jan 23 09:55:00 compute-0 great_perlman[105687]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:55:00 compute-0 great_perlman[105687]:             "lv_size": "21470642176",
Jan 23 09:55:00 compute-0 great_perlman[105687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 09:55:00 compute-0 great_perlman[105687]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:55:00 compute-0 great_perlman[105687]:             "name": "ceph_lv0",
Jan 23 09:55:00 compute-0 great_perlman[105687]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:55:00 compute-0 great_perlman[105687]:             "tags": {
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.cluster_name": "ceph",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.crush_device_class": "",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.encrypted": "0",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.osd_id": "1",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.type": "block",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.vdo": "0",
Jan 23 09:55:00 compute-0 great_perlman[105687]:                 "ceph.with_tpm": "0"
Jan 23 09:55:00 compute-0 great_perlman[105687]:             },
Jan 23 09:55:00 compute-0 great_perlman[105687]:             "type": "block",
Jan 23 09:55:00 compute-0 great_perlman[105687]:             "vg_name": "ceph_vg0"
Jan 23 09:55:00 compute-0 great_perlman[105687]:         }
Jan 23 09:55:00 compute-0 great_perlman[105687]:     ]
Jan 23 09:55:00 compute-0 great_perlman[105687]: }
Jan 23 09:55:00 compute-0 systemd[1]: libpod-cfab4d59dc5e64c83dd59d02c2c93246dccbad399a7efea5dc2d71c262059fd3.scope: Deactivated successfully.
Jan 23 09:55:00 compute-0 podman[105670]: 2026-01-23 09:55:00.70290169 +0000 UTC m=+0.506962487 container died cfab4d59dc5e64c83dd59d02c2c93246dccbad399a7efea5dc2d71c262059fd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 23 09:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c0b8ed2749a41ccffc6ea3ce3b9e166558192372c02c97534ea352707ea050c-merged.mount: Deactivated successfully.
Jan 23 09:55:00 compute-0 podman[105670]: 2026-01-23 09:55:00.763261268 +0000 UTC m=+0.567322065 container remove cfab4d59dc5e64c83dd59d02c2c93246dccbad399a7efea5dc2d71c262059fd3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_perlman, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:55:00 compute-0 systemd[1]: libpod-conmon-cfab4d59dc5e64c83dd59d02c2c93246dccbad399a7efea5dc2d71c262059fd3.scope: Deactivated successfully.
Jan 23 09:55:00 compute-0 sudo[105547]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:00 compute-0 sudo[105707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:55:00 compute-0 sudo[105707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:55:00 compute-0 sudo[105707]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:00 compute-0 sudo[105732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 09:55:00 compute-0 sudo[105732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:55:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:01 compute-0 ceph-mon[74335]: pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 0 op/s
Jan 23 09:55:01 compute-0 podman[105796]: 2026-01-23 09:55:01.356982321 +0000 UTC m=+0.051728320 container create bb7f7f6303f1417d437457b1d66d4a64ebf8cab316642a294b6a1d2bdcdf94b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_pare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 09:55:01 compute-0 systemd[1]: Started libpod-conmon-bb7f7f6303f1417d437457b1d66d4a64ebf8cab316642a294b6a1d2bdcdf94b9.scope.
Jan 23 09:55:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:55:01 compute-0 podman[105796]: 2026-01-23 09:55:01.338389137 +0000 UTC m=+0.033135166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:55:01 compute-0 podman[105796]: 2026-01-23 09:55:01.437757843 +0000 UTC m=+0.132503872 container init bb7f7f6303f1417d437457b1d66d4a64ebf8cab316642a294b6a1d2bdcdf94b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:55:01 compute-0 podman[105796]: 2026-01-23 09:55:01.444654823 +0000 UTC m=+0.139400822 container start bb7f7f6303f1417d437457b1d66d4a64ebf8cab316642a294b6a1d2bdcdf94b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_pare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:55:01 compute-0 sleepy_pare[105812]: 167 167
Jan 23 09:55:01 compute-0 podman[105796]: 2026-01-23 09:55:01.448279903 +0000 UTC m=+0.143025902 container attach bb7f7f6303f1417d437457b1d66d4a64ebf8cab316642a294b6a1d2bdcdf94b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:55:01 compute-0 systemd[1]: libpod-bb7f7f6303f1417d437457b1d66d4a64ebf8cab316642a294b6a1d2bdcdf94b9.scope: Deactivated successfully.
Jan 23 09:55:01 compute-0 conmon[105812]: conmon bb7f7f6303f1417d4374 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb7f7f6303f1417d437457b1d66d4a64ebf8cab316642a294b6a1d2bdcdf94b9.scope/container/memory.events
Jan 23 09:55:01 compute-0 podman[105796]: 2026-01-23 09:55:01.449872057 +0000 UTC m=+0.144618076 container died bb7f7f6303f1417d437457b1d66d4a64ebf8cab316642a294b6a1d2bdcdf94b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_pare, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 23 09:55:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6abea682c765996630e18b687fbd5dd182cac8c6c1f142da752e51cde452865f-merged.mount: Deactivated successfully.
Jan 23 09:55:01 compute-0 podman[105796]: 2026-01-23 09:55:01.510685888 +0000 UTC m=+0.205431897 container remove bb7f7f6303f1417d437457b1d66d4a64ebf8cab316642a294b6a1d2bdcdf94b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_pare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:55:01 compute-0 systemd[1]: libpod-conmon-bb7f7f6303f1417d437457b1d66d4a64ebf8cab316642a294b6a1d2bdcdf94b9.scope: Deactivated successfully.
Jan 23 09:55:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:01 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:01 compute-0 podman[105836]: 2026-01-23 09:55:01.678116733 +0000 UTC m=+0.046132915 container create d17f29200729cd1d8fbf5a155310c2a53724df72c5725ff05fd2bf3a67e44183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:55:01 compute-0 systemd[1]: Started libpod-conmon-d17f29200729cd1d8fbf5a155310c2a53724df72c5725ff05fd2bf3a67e44183.scope.
Jan 23 09:55:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580e24bade5b1cdc20b22b3a9ad1570829493386c95821c781f431ef025de31a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580e24bade5b1cdc20b22b3a9ad1570829493386c95821c781f431ef025de31a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580e24bade5b1cdc20b22b3a9ad1570829493386c95821c781f431ef025de31a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580e24bade5b1cdc20b22b3a9ad1570829493386c95821c781f431ef025de31a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:01 compute-0 podman[105836]: 2026-01-23 09:55:01.760439458 +0000 UTC m=+0.128455670 container init d17f29200729cd1d8fbf5a155310c2a53724df72c5725ff05fd2bf3a67e44183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:55:01 compute-0 podman[105836]: 2026-01-23 09:55:01.658697307 +0000 UTC m=+0.026713509 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:55:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v45: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 559 B/s rd, 0 op/s
Jan 23 09:55:01 compute-0 podman[105836]: 2026-01-23 09:55:01.771933035 +0000 UTC m=+0.139949217 container start d17f29200729cd1d8fbf5a155310c2a53724df72c5725ff05fd2bf3a67e44183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 23 09:55:01 compute-0 podman[105836]: 2026-01-23 09:55:01.776396638 +0000 UTC m=+0.144412850 container attach d17f29200729cd1d8fbf5a155310c2a53724df72c5725ff05fd2bf3a67e44183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_grothendieck, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:55:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:01 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c30002e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:01.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:02.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:02 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:02 compute-0 lvm[105931]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:55:02 compute-0 lvm[105931]: VG ceph_vg0 finished
Jan 23 09:55:02 compute-0 optimistic_grothendieck[105854]: {}
Jan 23 09:55:02 compute-0 systemd[1]: libpod-d17f29200729cd1d8fbf5a155310c2a53724df72c5725ff05fd2bf3a67e44183.scope: Deactivated successfully.
Jan 23 09:55:02 compute-0 podman[105836]: 2026-01-23 09:55:02.64913195 +0000 UTC m=+1.017148152 container died d17f29200729cd1d8fbf5a155310c2a53724df72c5725ff05fd2bf3a67e44183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:55:02 compute-0 systemd[1]: libpod-d17f29200729cd1d8fbf5a155310c2a53724df72c5725ff05fd2bf3a67e44183.scope: Consumed 1.378s CPU time.
Jan 23 09:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-580e24bade5b1cdc20b22b3a9ad1570829493386c95821c781f431ef025de31a-merged.mount: Deactivated successfully.
Jan 23 09:55:02 compute-0 podman[105836]: 2026-01-23 09:55:02.706284059 +0000 UTC m=+1.074300241 container remove d17f29200729cd1d8fbf5a155310c2a53724df72c5725ff05fd2bf3a67e44183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 09:55:02 compute-0 systemd[1]: libpod-conmon-d17f29200729cd1d8fbf5a155310c2a53724df72c5725ff05fd2bf3a67e44183.scope: Deactivated successfully.
Jan 23 09:55:02 compute-0 sudo[105732]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:55:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:55:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:55:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:55:02 compute-0 sudo[105950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:55:02 compute-0 sudo[105950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:55:02 compute-0 sudo[105950]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:03 compute-0 ceph-mon[74335]: pgmap v45: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 559 B/s rd, 0 op/s
Jan 23 09:55:03 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:55:03 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:55:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:03 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 0 op/s
Jan 23 09:55:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:03 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 09:55:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:04.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 09:55:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:04 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c300042f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 23 09:55:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:55:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:55:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:55:05 compute-0 ceph-mon[74335]: pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 0 op/s
Jan 23 09:55:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:55:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:55:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:05 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 0 op/s
Jan 23 09:55:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:05 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:05.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:06.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:06 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:07 compute-0 ceph-mon[74335]: pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 0 op/s
Jan 23 09:55:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:07 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c300042f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v48: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 0 op/s
Jan 23 09:55:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:07 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:07.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:55:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:08.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:55:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:08 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:09 compute-0 ceph-mon[74335]: pgmap v48: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 279 B/s rd, 0 op/s
Jan 23 09:55:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:09 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c48002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:09 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c300042f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:09.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:09] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:55:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:09] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:55:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:10.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:10 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:11 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:11 compute-0 ceph-mon[74335]: pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 09:55:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:11 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:11.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:12.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:12 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:12 compute-0 ceph-mon[74335]: pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 09:55:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:13 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c24000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:13 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c44001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:13.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:55:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:14.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:55:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:14 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c300042f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:14 compute-0 ceph-mon[74335]: pgmap v51: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:15 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:15 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c240016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:55:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:15.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:55:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:16.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:16 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c44001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:16 compute-0 sudo[105991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:55:16 compute-0 sudo[105991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:55:16 compute-0 sudo[105991]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:17 compute-0 ceph-mon[74335]: pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:17 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c300042f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:17 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:17.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:18.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:18 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c240016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:19 compute-0 ceph-mon[74335]: pgmap v53: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:19 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:19 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c300042f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:55:19
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'volumes', 'backups', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 09:55:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:19] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:55:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:19] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:55:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:55:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:19.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:55:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:55:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:55:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:55:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:20.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:20 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:55:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:21 compute-0 ceph-mon[74335]: pgmap v54: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:21 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 0 op/s
Jan 23 09:55:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:21 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:55:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:21.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:55:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:55:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:22.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:55:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:22 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200023c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:22 compute-0 ceph-mon[74335]: pgmap v55: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 0 op/s
Jan 23 09:55:22 compute-0 sudo[102350]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:23 compute-0 sudo[106171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hypvwutcntbhocvsedveibunjwtyakhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162122.750983-364-40812276264443/AnsiballZ_command.py'
Jan 23 09:55:23 compute-0 sudo[106171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:23 compute-0 python3.9[106173]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:55:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:23 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c24002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:23 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:55:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:23.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:55:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 09:55:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:24.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 09:55:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:24 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c440022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:24 compute-0 sudo[106171]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:24 compute-0 ceph-mon[74335]: pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:25 compute-0 sudo[106460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtpybaqkaexjldavrsmwbgvgotkysvby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162124.4542422-388-253768127453388/AnsiballZ_selinux.py'
Jan 23 09:55:25 compute-0 sudo[106460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:25 compute-0 python3.9[106462]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 23 09:55:25 compute-0 sudo[106460]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:25 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c200023c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v57: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:25 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c24002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:25.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:26.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:26 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c28003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:26 compute-0 sudo[106614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfnftgycalcefwptrorhebsdqpjbjbnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162125.9257915-421-161986208002418/AnsiballZ_command.py'
Jan 23 09:55:26 compute-0 sudo[106614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:26 compute-0 python3.9[106616]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 23 09:55:26 compute-0 sudo[106614]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:26 compute-0 ceph-mon[74335]: pgmap v57: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:27 compute-0 sudo[106766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcglfkpfuhcyvnglinwvccozkaqnsvwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162126.733589-445-257078261451206/AnsiballZ_file.py'
Jan 23 09:55:27 compute-0 sudo[106766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:27 compute-0 python3.9[106768]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:55:27 compute-0 sudo[106766]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:27 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c44003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:27 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:27 compute-0 sudo[106919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmzguzzmbgqefnoblxpkqqxodhxsxpgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162127.4531643-469-28553054091000/AnsiballZ_mount.py'
Jan 23 09:55:27 compute-0 sudo[106919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:55:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:27.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:55:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:28.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[101600]: 23/01/2026 09:55:28 : epoch 69734553 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c24002720 fd 38 proxy ignored for local
Jan 23 09:55:28 compute-0 kernel: ganesha.nfsd[105985]: segfault at 50 ip 00007f1cd2cda32e sp 00007f1c5a7fb210 error 4 in libntirpc.so.5.8[7f1cd2cbf000+2c000] likely on CPU 0 (core 0, socket 0)
Jan 23 09:55:28 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 23 09:55:28 compute-0 python3.9[106921]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 23 09:55:28 compute-0 sudo[106919]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:28 compute-0 systemd[1]: Started Process Core Dump (PID 106923/UID 0).
Jan 23 09:55:29 compute-0 sudo[107074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sglfqjkdirfknrmphyhhulqvnfrpnzfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162129.2470796-553-254525926876610/AnsiballZ_file.py'
Jan 23 09:55:29 compute-0 sudo[107074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:29 compute-0 ceph-mon[74335]: pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:29 compute-0 systemd-coredump[106924]: Process 101605 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 57:
                                                    #0  0x00007f1cd2cda32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 23 09:55:29 compute-0 systemd[1]: systemd-coredump@1-106923-0.service: Deactivated successfully.
Jan 23 09:55:29 compute-0 systemd[1]: systemd-coredump@1-106923-0.service: Consumed 1.442s CPU time.
Jan 23 09:55:29 compute-0 python3.9[107076]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:55:29 compute-0 sudo[107074]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:29 compute-0 podman[107082]: 2026-01-23 09:55:29.824906885 +0000 UTC m=+0.049886318 container died 8b07b0a91308a280be87c57a40f5eda65a176fbdaf3393b6b42491735a49ec88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8207052a3a812ab6a0f8a2480b8b48dd3ce3bb97f631979d30113cde1d081d4-merged.mount: Deactivated successfully.
Jan 23 09:55:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:29] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:55:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:29] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:55:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:55:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:29.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:55:30 compute-0 podman[107082]: 2026-01-23 09:55:30.000654429 +0000 UTC m=+0.225633832 container remove 8b07b0a91308a280be87c57a40f5eda65a176fbdaf3393b6b42491735a49ec88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 23 09:55:30 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
Jan 23 09:55:30 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 09:55:30 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.845s CPU time.
Jan 23 09:55:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:30.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:30 compute-0 ceph-mon[74335]: pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:55:30 compute-0 sudo[107272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvjvklaazkpnysijmtsvgwwsfhcmasgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162130.509741-577-82701231173999/AnsiballZ_stat.py'
Jan 23 09:55:30 compute-0 sudo[107272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:31 compute-0 python3.9[107274]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:55:31 compute-0 sudo[107272]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:31 compute-0 sudo[107350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyzlluvkhxbntbxqiumdrtvritomdgbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162130.509741-577-82701231173999/AnsiballZ_file.py'
Jan 23 09:55:31 compute-0 sudo[107350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:31 compute-0 python3.9[107352]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:55:31 compute-0 sudo[107350]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 425 B/s rd, 0 op/s
Jan 23 09:55:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:31.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:32.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:32 compute-0 sudo[107504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctkglfdinprswlmavermeyldlncxrsml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162132.4024174-640-73785263357814/AnsiballZ_stat.py'
Jan 23 09:55:32 compute-0 sudo[107504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:32 compute-0 python3.9[107506]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:55:32 compute-0 sudo[107504]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:33 compute-0 ceph-mon[74335]: pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 425 B/s rd, 0 op/s
Jan 23 09:55:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v61: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:55:33 compute-0 sudo[107659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cswkrbrunroihuvykdmqmrhtakfedvzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162133.5078015-679-96575880172554/AnsiballZ_getent.py'
Jan 23 09:55:33 compute-0 sudo[107659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:33.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:34 compute-0 python3.9[107661]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 23 09:55:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:34.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095534 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 09:55:34 compute-0 sudo[107659]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:34 compute-0 ceph-mon[74335]: pgmap v61: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:55:34 compute-0 sudo[107813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-legdcvkyoyddxtuywunrhirmmblvmjrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162134.4852846-709-228707510432928/AnsiballZ_getent.py'
Jan 23 09:55:34 compute-0 sudo[107813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:34 compute-0 python3.9[107815]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 23 09:55:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:55:34 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:55:35 compute-0 sudo[107813]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:35 compute-0 sudo[107967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvlhnlskjvjitovjwnzrjehgvifcwsmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162135.2016194-733-248421476574466/AnsiballZ_group.py'
Jan 23 09:55:35 compute-0 sudo[107967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:55:35 compute-0 python3.9[107969]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 09:55:35 compute-0 sudo[107967]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:55:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:36.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:55:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:55:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:36.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:36 compute-0 sudo[108120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkziehujnocaxikmjntrwvxbuepevpxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162136.2361083-760-72659609501143/AnsiballZ_file.py'
Jan 23 09:55:36 compute-0 sudo[108120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:36 compute-0 python3.9[108122]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 23 09:55:36 compute-0 sudo[108120]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:37 compute-0 sudo[108147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:55:37 compute-0 sudo[108147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:55:37 compute-0 sudo[108147]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:37 compute-0 ceph-mon[74335]: pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:55:37 compute-0 sudo[108297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwwidshfbljgpqxhxyzmtaufmwqegwpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162137.3053744-793-217614180703888/AnsiballZ_dnf.py'
Jan 23 09:55:37 compute-0 sudo[108297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:55:37 compute-0 python3.9[108299]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:55:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:38.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:39 compute-0 ceph-mon[74335]: pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:55:39 compute-0 sudo[108297]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:55:39 compute-0 sudo[108453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzwgdxyrqrltaixzfimiijmyalmegsyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162139.56344-817-246774672130058/AnsiballZ_file.py'
Jan 23 09:55:39 compute-0 sudo[108453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:39] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:55:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:39] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:55:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:55:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:40.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:55:40 compute-0 python3.9[108455]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:55:40 compute-0 sudo[108453]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:40.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:40 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 2.
Jan 23 09:55:40 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:55:40 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.845s CPU time.
Jan 23 09:55:40 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:55:40 compute-0 ceph-mon[74335]: pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:55:40 compute-0 podman[108619]: 2026-01-23 09:55:40.561306194 +0000 UTC m=+0.045823623 container create e140f2ef7c4c06043c5a4cfd1a05e8a7ae03ba7e17d3e9ae6f49aac4918f0373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 23 09:55:40 compute-0 sudo[108663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpzzdvdbvlchrtdcdwxjmmzuzyazdqhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162140.3018599-841-117193592960408/AnsiballZ_stat.py'
Jan 23 09:55:40 compute-0 sudo[108663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe70ddda811a516770f3ecca53ab8fb4293004788661c4eef57e6c5e6671292/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe70ddda811a516770f3ecca53ab8fb4293004788661c4eef57e6c5e6671292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe70ddda811a516770f3ecca53ab8fb4293004788661c4eef57e6c5e6671292/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe70ddda811a516770f3ecca53ab8fb4293004788661c4eef57e6c5e6671292/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:55:40 compute-0 podman[108619]: 2026-01-23 09:55:40.62768055 +0000 UTC m=+0.112197979 container init e140f2ef7c4c06043c5a4cfd1a05e8a7ae03ba7e17d3e9ae6f49aac4918f0373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 09:55:40 compute-0 podman[108619]: 2026-01-23 09:55:40.634278898 +0000 UTC m=+0.118796327 container start e140f2ef7c4c06043c5a4cfd1a05e8a7ae03ba7e17d3e9ae6f49aac4918f0373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:55:40 compute-0 podman[108619]: 2026-01-23 09:55:40.542045587 +0000 UTC m=+0.026563036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:55:40 compute-0 bash[108619]: e140f2ef7c4c06043c5a4cfd1a05e8a7ae03ba7e17d3e9ae6f49aac4918f0373
Jan 23 09:55:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:40 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 09:55:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:40 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 09:55:40 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:55:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:40 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 09:55:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:40 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 09:55:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:40 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 09:55:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:40 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 09:55:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:40 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 09:55:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:40 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:55:40 compute-0 python3.9[108669]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:55:40 compute-0 sudo[108663]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:41 compute-0 sudo[108785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrqyogmgcnyuozggikcqwnpkwolkoqhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162140.3018599-841-117193592960408/AnsiballZ_file.py'
Jan 23 09:55:41 compute-0 sudo[108785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:41 compute-0 python3.9[108787]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:55:41 compute-0 sudo[108785]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:41 compute-0 sudo[108938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pymxyhstghdtyhqfsaruywazfdgotrlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162141.5419917-880-132416184376503/AnsiballZ_stat.py'
Jan 23 09:55:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:55:41 compute-0 sudo[108938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:42.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:42 compute-0 python3.9[108940]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:55:42 compute-0 sudo[108938]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:42.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:42 compute-0 sudo[109017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psbjebwcwktleniccitbuuckmaillaet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162141.5419917-880-132416184376503/AnsiballZ_file.py'
Jan 23 09:55:42 compute-0 sudo[109017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:42 compute-0 python3.9[109019]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:55:42 compute-0 sudo[109017]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:43 compute-0 sudo[109169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgzjaoytouqmxraiukhnkytmhkxghils ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162143.0100293-925-16351127916847/AnsiballZ_dnf.py'
Jan 23 09:55:43 compute-0 sudo[109169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:43 compute-0 ceph-mon[74335]: pgmap v65: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:55:43 compute-0 python3.9[109171]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:55:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:55:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:44.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:44.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:44 compute-0 ceph-mon[74335]: pgmap v66: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:44.940079) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162144940506, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1832, "num_deletes": 251, "total_data_size": 5462630, "memory_usage": 5788792, "flush_reason": "Manual Compaction"}
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162144986441, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 5055840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9086, "largest_seqno": 10917, "table_properties": {"data_size": 5047090, "index_size": 5308, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19703, "raw_average_key_size": 20, "raw_value_size": 5028702, "raw_average_value_size": 5332, "num_data_blocks": 236, "num_entries": 943, "num_filter_entries": 943, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162058, "oldest_key_time": 1769162058, "file_creation_time": 1769162144, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 46861 microseconds, and 18429 cpu microseconds.
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:44.987033) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 5055840 bytes OK
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:44.987085) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:44.989885) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:44.990152) EVENT_LOG_v1 {"time_micros": 1769162144990136, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:44.990188) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 5454259, prev total WAL file size 5454259, number of live WAL files 2.
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:44.992610) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(4937KB)], [23(11MB)]
Jan 23 09:55:44 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162144992830, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 17496545, "oldest_snapshot_seqno": -1}
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4118 keys, 13781570 bytes, temperature: kUnknown
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162145231450, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 13781570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13747992, "index_size": 22204, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 104928, "raw_average_key_size": 25, "raw_value_size": 13666583, "raw_average_value_size": 3318, "num_data_blocks": 954, "num_entries": 4118, "num_filter_entries": 4118, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769162144, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:45.231868) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13781570 bytes
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:45.304460) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 73.3 rd, 57.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.8, 11.9 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(6.2) write-amplify(2.7) OK, records in: 4652, records dropped: 534 output_compression: NoCompression
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:45.304525) EVENT_LOG_v1 {"time_micros": 1769162145304498, "job": 8, "event": "compaction_finished", "compaction_time_micros": 238743, "compaction_time_cpu_micros": 42641, "output_level": 6, "num_output_files": 1, "total_output_size": 13781570, "num_input_records": 4652, "num_output_records": 4118, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162145305546, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162145307838, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:44.992194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:45.307920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:45.307926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:45.307927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:45.307929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:55:45 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:55:45.307931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:55:45 compute-0 sudo[109169]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:55:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:46.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:46.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:46 compute-0 python3.9[109326]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:55:46 compute-0 ceph-mon[74335]: pgmap v67: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:55:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:47 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 09:55:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:47 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:55:47 compute-0 ceph-mgr[74633]: [dashboard INFO request] [192.168.122.100:49912] [POST] [200] [0.157s] [4.0B] [fe8f679a-6504-4159-907e-d5f272d77636] /api/prometheus_receiver
Jan 23 09:55:47 compute-0 python3.9[109478]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 23 09:55:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:55:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:55:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:55:48 compute-0 python3.9[109629]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:55:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:48.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:48 compute-0 ceph-mon[74335]: pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:55:49 compute-0 sudo[109780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuuaewyjfscowhqifylufvfjkyoplbuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162148.7542615-1048-264195427271980/AnsiballZ_systemd.py'
Jan 23 09:55:49 compute-0 sudo[109780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:55:49 compute-0 python3.9[109782]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:55:49 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 23 09:55:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:49] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:55:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:49] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Jan 23 09:55:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:55:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:55:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:55:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:50.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:55:50 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 23 09:55:50 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 23 09:55:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:55:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f2833272fd0>)]
Jan 23 09:55:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:55:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:55:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:55:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f2833265070>)]
Jan 23 09:55:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 23 09:55:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 23 09:55:50 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 23 09:55:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:55:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:50.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:55:50 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 23 09:55:50 compute-0 sudo[109780]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:50 compute-0 ceph-mon[74335]: pgmap v69: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:55:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:55:51 compute-0 python3.9[109946]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 23 09:55:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 23 09:55:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:52.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:52 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.nbdygh(active, since 92s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:55:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:55:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:52.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:55:53 compute-0 ceph-mon[74335]: pgmap v70: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 23 09:55:53 compute-0 ceph-mon[74335]: mgrmap e32: compute-0.nbdygh(active, since 92s), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6298000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 09:55:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.2 KiB/s wr, 2 op/s
Jan 23 09:55:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:53 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62900014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:55:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:54.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:55:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:54 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6278000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:55:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:54.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:55:54 compute-0 ceph-mon[74335]: pgmap v71: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.2 KiB/s wr, 2 op/s
Jan 23 09:55:55 compute-0 sudo[110115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctrznsvyxiaozrxrfcmczqrmlovmznve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162155.1735694-1219-148054938203695/AnsiballZ_systemd.py'
Jan 23 09:55:55 compute-0 sudo[110115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:55 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62900014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:55 compute-0 python3.9[110117]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:55:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.2 KiB/s wr, 2 op/s
Jan 23 09:55:55 compute-0 sudo[110115]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:55 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6284000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:56.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095556 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 09:55:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:56 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f627c000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:56.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:56 compute-0 sudo[110271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmkxnbikggkavzfzujpcjuifdgduhvvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162156.0002637-1219-174890915153705/AnsiballZ_systemd.py'
Jan 23 09:55:56 compute-0 sudo[110271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:55:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:55:56 compute-0 python3.9[110273]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:55:56 compute-0 sudo[110271]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:56 compute-0 ceph-mon[74335]: pgmap v72: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.2 KiB/s wr, 2 op/s
Jan 23 09:55:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:55:56.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:55:57 compute-0 sudo[110300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:55:57 compute-0 sudo[110300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:55:57 compute-0 sudo[110300]: pam_unix(sudo:session): session closed for user root
Jan 23 09:55:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:57 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 23 09:55:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:57 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62900025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:55:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:58.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:55:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:58 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6284001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:55:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:55:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:58.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:55:58 compute-0 sshd-session[98665]: Connection closed by 192.168.122.30 port 55958
Jan 23 09:55:58 compute-0 sshd-session[98661]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:55:58 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 23 09:55:58 compute-0 systemd[1]: session-38.scope: Consumed 1min 16.759s CPU time.
Jan 23 09:55:58 compute-0 systemd-logind[784]: Session 38 logged out. Waiting for processes to exit.
Jan 23 09:55:58 compute-0 systemd-logind[784]: Removed session 38.
Jan 23 09:55:58 compute-0 ceph-mon[74335]: pgmap v73: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 23 09:55:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:59 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f627c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 09:55:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:55:59 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:55:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:59] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Jan 23 09:55:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:55:59] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Jan 23 09:56:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:00.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:00 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:00.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:00 compute-0 ceph-mon[74335]: pgmap v74: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 09:56:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:01 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6284001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v75: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 09:56:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:01 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f627c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:02.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:02 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62900025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:02.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:03 compute-0 sudo[110331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:56:03 compute-0 sudo[110331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:03 compute-0 sudo[110331]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:03 compute-0 sudo[110356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 09:56:03 compute-0 sudo[110356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:03 compute-0 ceph-mon[74335]: pgmap v75: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 09:56:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:03 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v76: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:56:03 compute-0 podman[110454]: 2026-01-23 09:56:03.833949741 +0000 UTC m=+0.119294561 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:56:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:03 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6284001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:03 compute-0 podman[110454]: 2026-01-23 09:56:03.979683052 +0000 UTC m=+0.265027872 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 09:56:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:04.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:04 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6284001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:04.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:04 compute-0 podman[110572]: 2026-01-23 09:56:04.853542481 +0000 UTC m=+0.170567978 container exec 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:56:04 compute-0 podman[110572]: 2026-01-23 09:56:04.867912189 +0000 UTC m=+0.184937636 container exec_died 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:56:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:56:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:56:05 compute-0 podman[110662]: 2026-01-23 09:56:05.217552153 +0000 UTC m=+0.050865196 container exec e140f2ef7c4c06043c5a4cfd1a05e8a7ae03ba7e17d3e9ae6f49aac4918f0373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 09:56:05 compute-0 podman[110662]: 2026-01-23 09:56:05.229953206 +0000 UTC m=+0.063266229 container exec_died e140f2ef7c4c06043c5a4cfd1a05e8a7ae03ba7e17d3e9ae6f49aac4918f0373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 09:56:05 compute-0 ceph-mon[74335]: pgmap v76: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:56:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:56:05 compute-0 podman[110723]: 2026-01-23 09:56:05.45464781 +0000 UTC m=+0.059148272 container exec 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 09:56:05 compute-0 podman[110723]: 2026-01-23 09:56:05.474721321 +0000 UTC m=+0.079221753 container exec_died 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 09:56:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:05 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62900025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:56:05 compute-0 podman[110786]: 2026-01-23 09:56:05.869671693 +0000 UTC m=+0.243623084 container exec 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, name=keepalived, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 23 09:56:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:05 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6278002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:06 compute-0 podman[110807]: 2026-01-23 09:56:06.011582565 +0000 UTC m=+0.106280851 container exec_died 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, version=2.2.4)
Jan 23 09:56:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:06.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:06 compute-0 podman[110786]: 2026-01-23 09:56:06.062738408 +0000 UTC m=+0.436689769 container exec_died 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, io.openshift.tags=Ceph keepalived, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container)
Jan 23 09:56:06 compute-0 sshd-session[110818]: Accepted publickey for zuul from 192.168.122.30 port 43832 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:56:06 compute-0 systemd-logind[784]: New session 40 of user zuul.
Jan 23 09:56:06 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 23 09:56:06 compute-0 sshd-session[110818]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:56:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:06 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f627c0028c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:06.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:06 compute-0 podman[110855]: 2026-01-23 09:56:06.536995934 +0000 UTC m=+0.227923837 container exec a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:56:06 compute-0 podman[110937]: 2026-01-23 09:56:06.690576278 +0000 UTC m=+0.109724729 container exec_died a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:56:06 compute-0 ceph-mon[74335]: pgmap v77: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:56:06 compute-0 podman[110855]: 2026-01-23 09:56:06.89899349 +0000 UTC m=+0.589921393 container exec_died a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:56:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:56:06.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 09:56:07 compute-0 podman[111078]: 2026-01-23 09:56:07.373131082 +0000 UTC m=+0.077923495 container exec 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:56:07 compute-0 python3.9[111047]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:56:07 compute-0 podman[111078]: 2026-01-23 09:56:07.610605869 +0000 UTC m=+0.315398262 container exec_died 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 09:56:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:07 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6284003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v78: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:56:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:07 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62900025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:08.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:08 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6278002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:08.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:08 compute-0 podman[111214]: 2026-01-23 09:56:08.483412007 +0000 UTC m=+0.473829293 container exec 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:56:08 compute-0 podman[111246]: 2026-01-23 09:56:08.639667847 +0000 UTC m=+0.089538205 container exec_died 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:56:08 compute-0 podman[111214]: 2026-01-23 09:56:08.65139995 +0000 UTC m=+0.641817226 container exec_died 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 09:56:08 compute-0 sudo[110356]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:56:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:09 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f627c0028c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:56:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:09 compute-0 ceph-mon[74335]: pgmap v78: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:56:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:09 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6284003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:09] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 23 09:56:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:09] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 23 09:56:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:10.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:10 compute-0 sudo[111383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsknntsexurbkutybsblbxwepjfxaogt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162169.5834672-63-255824943187299/AnsiballZ_getent.py'
Jan 23 09:56:10 compute-0 sudo[111383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:10 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62900025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:10.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:10 compute-0 sudo[111386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:56:10 compute-0 sudo[111386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:10 compute-0 sudo[111386]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:10 compute-0 python3.9[111385]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 23 09:56:10 compute-0 sudo[111411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 09:56:10 compute-0 sudo[111411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:10 compute-0 sudo[111383]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:10 compute-0 sudo[111411]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:56:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:56:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:56:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:56:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:56:11 compute-0 sudo[111616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgyignizfbcijyazwlriexdqghsvmgxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162170.7323835-99-58596127288555/AnsiballZ_setup.py'
Jan 23 09:56:11 compute-0 sudo[111616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:11 compute-0 python3.9[111618]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:56:11 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:11 compute-0 ceph-mon[74335]: pgmap v79: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:11 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:11 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6278003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:11 compute-0 sudo[111616]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=infra.usagestats t=2026-01-23T09:56:11.662150306Z level=info msg="Usage stats are ready to report"
Jan 23 09:56:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:56:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 09:56:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:11 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f627c0028c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:12.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:12 compute-0 sudo[111702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwqxqcrctguyphribnqkgomzsqicgmak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162170.7323835-99-58596127288555/AnsiballZ_dnf.py'
Jan 23 09:56:12 compute-0 sudo[111702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 09:56:12 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:56:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 09:56:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:56:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:56:12 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:56:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:12 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6284003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:12 compute-0 sudo[111705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:56:12 compute-0 sudo[111705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:12 compute-0 sudo[111705]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:12.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:12 compute-0 sudo[111730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 09:56:12 compute-0 sudo[111730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:12 compute-0 python3.9[111704]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 09:56:12 compute-0 podman[111798]: 2026-01-23 09:56:12.697955077 +0000 UTC m=+0.024885138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:56:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:56:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:56:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:12 compute-0 ceph-mon[74335]: pgmap v80: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 09:56:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:56:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:56:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:56:12 compute-0 podman[111798]: 2026-01-23 09:56:12.962529785 +0000 UTC m=+0.289459826 container create c191c5dc155b2995a30cfce482bdf8849e3465313b382bd9d7e8c4af28464c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_jones, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 23 09:56:13 compute-0 systemd[1]: Started libpod-conmon-c191c5dc155b2995a30cfce482bdf8849e3465313b382bd9d7e8c4af28464c8c.scope.
Jan 23 09:56:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:56:13 compute-0 podman[111798]: 2026-01-23 09:56:13.133872173 +0000 UTC m=+0.460802214 container init c191c5dc155b2995a30cfce482bdf8849e3465313b382bd9d7e8c4af28464c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_jones, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 09:56:13 compute-0 podman[111798]: 2026-01-23 09:56:13.142305013 +0000 UTC m=+0.469235054 container start c191c5dc155b2995a30cfce482bdf8849e3465313b382bd9d7e8c4af28464c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_jones, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 09:56:13 compute-0 happy_jones[111814]: 167 167
Jan 23 09:56:13 compute-0 systemd[1]: libpod-c191c5dc155b2995a30cfce482bdf8849e3465313b382bd9d7e8c4af28464c8c.scope: Deactivated successfully.
Jan 23 09:56:13 compute-0 conmon[111814]: conmon c191c5dc155b2995a30c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c191c5dc155b2995a30cfce482bdf8849e3465313b382bd9d7e8c4af28464c8c.scope/container/memory.events
Jan 23 09:56:13 compute-0 podman[111798]: 2026-01-23 09:56:13.228165232 +0000 UTC m=+0.555095273 container attach c191c5dc155b2995a30cfce482bdf8849e3465313b382bd9d7e8c4af28464c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_jones, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:56:13 compute-0 podman[111798]: 2026-01-23 09:56:13.229871941 +0000 UTC m=+0.556801992 container died c191c5dc155b2995a30cfce482bdf8849e3465313b382bd9d7e8c4af28464c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dfb86f8e5cf2643dd9e5702e384de4808e928f275b4a26dcd08e42462026320-merged.mount: Deactivated successfully.
Jan 23 09:56:13 compute-0 podman[111798]: 2026-01-23 09:56:13.498286017 +0000 UTC m=+0.825216058 container remove c191c5dc155b2995a30cfce482bdf8849e3465313b382bd9d7e8c4af28464c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_jones, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:56:13 compute-0 systemd[1]: libpod-conmon-c191c5dc155b2995a30cfce482bdf8849e3465313b382bd9d7e8c4af28464c8c.scope: Deactivated successfully.
Jan 23 09:56:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:13 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62900025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:13 compute-0 podman[111841]: 2026-01-23 09:56:13.678608341 +0000 UTC m=+0.048731626 container create 1279be3d2a301fcf758bd76215ccf498ac50d72960cccb4e708a878273e5a677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:56:13 compute-0 systemd[1]: Started libpod-conmon-1279be3d2a301fcf758bd76215ccf498ac50d72960cccb4e708a878273e5a677.scope.
Jan 23 09:56:13 compute-0 podman[111841]: 2026-01-23 09:56:13.657054159 +0000 UTC m=+0.027177464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:56:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0f24de2bed970ede8c1be0ddbcd28046dfb7f6985726338363784f8b5369c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0f24de2bed970ede8c1be0ddbcd28046dfb7f6985726338363784f8b5369c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0f24de2bed970ede8c1be0ddbcd28046dfb7f6985726338363784f8b5369c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0f24de2bed970ede8c1be0ddbcd28046dfb7f6985726338363784f8b5369c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0f24de2bed970ede8c1be0ddbcd28046dfb7f6985726338363784f8b5369c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:13 compute-0 podman[111841]: 2026-01-23 09:56:13.779772511 +0000 UTC m=+0.149895816 container init 1279be3d2a301fcf758bd76215ccf498ac50d72960cccb4e708a878273e5a677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 09:56:13 compute-0 podman[111841]: 2026-01-23 09:56:13.790292743 +0000 UTC m=+0.160416028 container start 1279be3d2a301fcf758bd76215ccf498ac50d72960cccb4e708a878273e5a677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:56:13 compute-0 podman[111841]: 2026-01-23 09:56:13.795537463 +0000 UTC m=+0.165660748 container attach 1279be3d2a301fcf758bd76215ccf498ac50d72960cccb4e708a878273e5a677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hugle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 09:56:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:13 compute-0 sudo[111702]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:13 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6278003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:14.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:14 compute-0 optimistic_hugle[111858]: --> passed data devices: 0 physical, 1 LVM
Jan 23 09:56:14 compute-0 optimistic_hugle[111858]: --> All data devices are unavailable
Jan 23 09:56:14 compute-0 systemd[1]: libpod-1279be3d2a301fcf758bd76215ccf498ac50d72960cccb4e708a878273e5a677.scope: Deactivated successfully.
Jan 23 09:56:14 compute-0 podman[111841]: 2026-01-23 09:56:14.181823898 +0000 UTC m=+0.551947183 container died 1279be3d2a301fcf758bd76215ccf498ac50d72960cccb4e708a878273e5a677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hugle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:56:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:14 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6278003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:14.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca0f24de2bed970ede8c1be0ddbcd28046dfb7f6985726338363784f8b5369c8-merged.mount: Deactivated successfully.
Jan 23 09:56:14 compute-0 podman[111841]: 2026-01-23 09:56:14.288786814 +0000 UTC m=+0.658910099 container remove 1279be3d2a301fcf758bd76215ccf498ac50d72960cccb4e708a878273e5a677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:56:14 compute-0 systemd[1]: libpod-conmon-1279be3d2a301fcf758bd76215ccf498ac50d72960cccb4e708a878273e5a677.scope: Deactivated successfully.
Jan 23 09:56:14 compute-0 sudo[111730]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:14 compute-0 sudo[111911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:56:14 compute-0 sudo[111911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:14 compute-0 sudo[111911]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:14 compute-0 sudo[111959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 09:56:14 compute-0 sudo[111959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:14 compute-0 sudo[112088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvtpeukmkaxogeledphnklcbvyklxusi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162174.4251657-141-24121938035094/AnsiballZ_dnf.py'
Jan 23 09:56:14 compute-0 sudo[112088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:14 compute-0 python3.9[112099]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:56:14 compute-0 podman[112128]: 2026-01-23 09:56:14.964436084 +0000 UTC m=+0.111943090 container create d37aa5c74eca6d1dc1f32a291e583ab233d0fab5bb3b7f0203158742e9afa815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 09:56:14 compute-0 podman[112128]: 2026-01-23 09:56:14.876066231 +0000 UTC m=+0.023573237 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:56:15 compute-0 systemd[1]: Started libpod-conmon-d37aa5c74eca6d1dc1f32a291e583ab233d0fab5bb3b7f0203158742e9afa815.scope.
Jan 23 09:56:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:56:15 compute-0 podman[112128]: 2026-01-23 09:56:15.052043206 +0000 UTC m=+0.199550232 container init d37aa5c74eca6d1dc1f32a291e583ab233d0fab5bb3b7f0203158742e9afa815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 23 09:56:15 compute-0 podman[112128]: 2026-01-23 09:56:15.061891468 +0000 UTC m=+0.209398484 container start d37aa5c74eca6d1dc1f32a291e583ab233d0fab5bb3b7f0203158742e9afa815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_perlman, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:56:15 compute-0 youthful_perlman[112146]: 167 167
Jan 23 09:56:15 compute-0 systemd[1]: libpod-d37aa5c74eca6d1dc1f32a291e583ab233d0fab5bb3b7f0203158742e9afa815.scope: Deactivated successfully.
Jan 23 09:56:15 compute-0 podman[112128]: 2026-01-23 09:56:15.070095713 +0000 UTC m=+0.217602739 container attach d37aa5c74eca6d1dc1f32a291e583ab233d0fab5bb3b7f0203158742e9afa815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 09:56:15 compute-0 podman[112128]: 2026-01-23 09:56:15.072154562 +0000 UTC m=+0.219661568 container died d37aa5c74eca6d1dc1f32a291e583ab233d0fab5bb3b7f0203158742e9afa815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_perlman, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 09:56:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-54ba24c349793e9870490ff721332dfb40292356e356b6edd22610e663928478-merged.mount: Deactivated successfully.
Jan 23 09:56:15 compute-0 ceph-mon[74335]: pgmap v81: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:15 compute-0 podman[112128]: 2026-01-23 09:56:15.48197804 +0000 UTC m=+0.629485046 container remove d37aa5c74eca6d1dc1f32a291e583ab233d0fab5bb3b7f0203158742e9afa815 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_perlman, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:56:15 compute-0 systemd[1]: libpod-conmon-d37aa5c74eca6d1dc1f32a291e583ab233d0fab5bb3b7f0203158742e9afa815.scope: Deactivated successfully.
Jan 23 09:56:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:15 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6284003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:15 compute-0 podman[112169]: 2026-01-23 09:56:15.650387908 +0000 UTC m=+0.044345612 container create 5e013f3b67cbe6299c79b5fd3f3e1d4266e52cf9ff806d10059ffe1e9e3de83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 23 09:56:15 compute-0 systemd[1]: Started libpod-conmon-5e013f3b67cbe6299c79b5fd3f3e1d4266e52cf9ff806d10059ffe1e9e3de83c.scope.
Jan 23 09:56:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:56:15 compute-0 podman[112169]: 2026-01-23 09:56:15.631412874 +0000 UTC m=+0.025370608 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc78def2f49e03f637df4417f8d260dc47705a98abd8556abb30be5a7c6a70b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc78def2f49e03f637df4417f8d260dc47705a98abd8556abb30be5a7c6a70b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc78def2f49e03f637df4417f8d260dc47705a98abd8556abb30be5a7c6a70b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc78def2f49e03f637df4417f8d260dc47705a98abd8556abb30be5a7c6a70b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:15 compute-0 podman[112169]: 2026-01-23 09:56:15.742395446 +0000 UTC m=+0.136353170 container init 5e013f3b67cbe6299c79b5fd3f3e1d4266e52cf9ff806d10059ffe1e9e3de83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:56:15 compute-0 podman[112169]: 2026-01-23 09:56:15.748784249 +0000 UTC m=+0.142741953 container start 5e013f3b67cbe6299c79b5fd3f3e1d4266e52cf9ff806d10059ffe1e9e3de83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 23 09:56:15 compute-0 podman[112169]: 2026-01-23 09:56:15.751841647 +0000 UTC m=+0.145799371 container attach 5e013f3b67cbe6299c79b5fd3f3e1d4266e52cf9ff806d10059ffe1e9e3de83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 09:56:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[108667]: 23/01/2026 09:56:15 : epoch 6973459c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62900025c0 fd 38 proxy ignored for local
Jan 23 09:56:15 compute-0 kernel: ganesha.nfsd[109976]: segfault at 50 ip 00007f63229e332e sp 00007f62cd7f9210 error 4 in libntirpc.so.5.8[7f63229c8000+2c000] likely on CPU 1 (core 0, socket 1)
Jan 23 09:56:15 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 23 09:56:15 compute-0 systemd[1]: Started Process Core Dump (PID 112193/UID 0).
Jan 23 09:56:16 compute-0 compassionate_payne[112186]: {
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:     "1": [
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:         {
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             "devices": [
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "/dev/loop3"
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             ],
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             "lv_name": "ceph_lv0",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             "lv_size": "21470642176",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             "name": "ceph_lv0",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             "tags": {
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.cluster_name": "ceph",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.crush_device_class": "",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.encrypted": "0",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.osd_id": "1",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.type": "block",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.vdo": "0",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:                 "ceph.with_tpm": "0"
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             },
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             "type": "block",
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:             "vg_name": "ceph_vg0"
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:         }
Jan 23 09:56:16 compute-0 compassionate_payne[112186]:     ]
Jan 23 09:56:16 compute-0 compassionate_payne[112186]: }
Jan 23 09:56:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:16.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:16 compute-0 systemd[1]: libpod-5e013f3b67cbe6299c79b5fd3f3e1d4266e52cf9ff806d10059ffe1e9e3de83c.scope: Deactivated successfully.
Jan 23 09:56:16 compute-0 podman[112169]: 2026-01-23 09:56:16.08120252 +0000 UTC m=+0.475160224 container died 5e013f3b67cbe6299c79b5fd3f3e1d4266e52cf9ff806d10059ffe1e9e3de83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfc78def2f49e03f637df4417f8d260dc47705a98abd8556abb30be5a7c6a70b-merged.mount: Deactivated successfully.
Jan 23 09:56:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:16.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:16 compute-0 podman[112169]: 2026-01-23 09:56:16.380561492 +0000 UTC m=+0.774519196 container remove 5e013f3b67cbe6299c79b5fd3f3e1d4266e52cf9ff806d10059ffe1e9e3de83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_payne, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:56:16 compute-0 systemd[1]: libpod-conmon-5e013f3b67cbe6299c79b5fd3f3e1d4266e52cf9ff806d10059ffe1e9e3de83c.scope: Deactivated successfully.
Jan 23 09:56:16 compute-0 sudo[111959]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:16 compute-0 ceph-mon[74335]: pgmap v82: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:16 compute-0 sudo[112088]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:16 compute-0 sudo[112210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:56:16 compute-0 sudo[112210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:16 compute-0 sudo[112210]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:16 compute-0 sudo[112239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 09:56:16 compute-0 sudo[112239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:56:16.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:56:17 compute-0 podman[112374]: 2026-01-23 09:56:17.046973367 +0000 UTC m=+0.047813232 container create 2f9c00ef67bbe95dd470e5438b915f47a8a012b2dd07ba25251979d527b0b724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 09:56:17 compute-0 podman[112374]: 2026-01-23 09:56:17.02685729 +0000 UTC m=+0.027697185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:56:17 compute-0 sudo[112388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:56:17 compute-0 sudo[112388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:17 compute-0 sudo[112388]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:17 compute-0 systemd[1]: Started libpod-conmon-2f9c00ef67bbe95dd470e5438b915f47a8a012b2dd07ba25251979d527b0b724.scope.
Jan 23 09:56:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:56:17 compute-0 podman[112374]: 2026-01-23 09:56:17.372921382 +0000 UTC m=+0.373761267 container init 2f9c00ef67bbe95dd470e5438b915f47a8a012b2dd07ba25251979d527b0b724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 09:56:17 compute-0 podman[112374]: 2026-01-23 09:56:17.381978451 +0000 UTC m=+0.382818326 container start 2f9c00ef67bbe95dd470e5438b915f47a8a012b2dd07ba25251979d527b0b724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 09:56:17 compute-0 systemd-coredump[112194]: Process 108672 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007f63229e332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 23 09:56:17 compute-0 systemd[1]: libpod-2f9c00ef67bbe95dd470e5438b915f47a8a012b2dd07ba25251979d527b0b724.scope: Deactivated successfully.
Jan 23 09:56:17 compute-0 sad_bell[112438]: 167 167
Jan 23 09:56:17 compute-0 conmon[112438]: conmon 2f9c00ef67bbe95dd470 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f9c00ef67bbe95dd470e5438b915f47a8a012b2dd07ba25251979d527b0b724.scope/container/memory.events
Jan 23 09:56:17 compute-0 sudo[112494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnzafhkktrtvpcuyteqdgnjtnwlybhxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162176.7159686-165-169704103660062/AnsiballZ_systemd.py'
Jan 23 09:56:17 compute-0 sudo[112494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:17 compute-0 systemd[1]: systemd-coredump@2-112193-0.service: Deactivated successfully.
Jan 23 09:56:17 compute-0 systemd[1]: systemd-coredump@2-112193-0.service: Consumed 1.348s CPU time.
Jan 23 09:56:17 compute-0 podman[112374]: 2026-01-23 09:56:17.543578474 +0000 UTC m=+0.544418359 container attach 2f9c00ef67bbe95dd470e5438b915f47a8a012b2dd07ba25251979d527b0b724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_bell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:56:17 compute-0 podman[112374]: 2026-01-23 09:56:17.544154841 +0000 UTC m=+0.544994736 container died 2f9c00ef67bbe95dd470e5438b915f47a8a012b2dd07ba25251979d527b0b724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_bell, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:56:17 compute-0 python3.9[112502]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 09:56:17 compute-0 sudo[112494]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 09:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cbb042fd9b91de93bb11a95b9da1760f0b24d4e056c35e6465fe0e5f31b8c4a-merged.mount: Deactivated successfully.
Jan 23 09:56:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:18.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:18.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:18 compute-0 python3.9[112676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:56:18 compute-0 podman[112374]: 2026-01-23 09:56:18.942646013 +0000 UTC m=+1.943485878 container remove 2f9c00ef67bbe95dd470e5438b915f47a8a012b2dd07ba25251979d527b0b724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:56:18 compute-0 systemd[1]: libpod-conmon-2f9c00ef67bbe95dd470e5438b915f47a8a012b2dd07ba25251979d527b0b724.scope: Deactivated successfully.
Jan 23 09:56:18 compute-0 podman[112510]: 2026-01-23 09:56:18.98579645 +0000 UTC m=+1.465190345 container died e140f2ef7c4c06043c5a4cfd1a05e8a7ae03ba7e17d3e9ae6f49aac4918f0373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 23 09:56:19 compute-0 sudo[112847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlvwzpoanuaadvyziltuvtskkhacbhdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162179.0378265-219-25702712334035/AnsiballZ_sefcontext.py'
Jan 23 09:56:19 compute-0 sudo[112847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:19 compute-0 ceph-mon[74335]: pgmap v83: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 09:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fe70ddda811a516770f3ecca53ab8fb4293004788661c4eef57e6c5e6671292-merged.mount: Deactivated successfully.
Jan 23 09:56:19 compute-0 python3.9[112849]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 23 09:56:19 compute-0 podman[112510]: 2026-01-23 09:56:19.827768818 +0000 UTC m=+2.307162693 container remove e140f2ef7c4c06043c5a4cfd1a05e8a7ae03ba7e17d3e9ae6f49aac4918f0373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v84: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:19 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:56:19
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.nfs', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'backups', '.rgw.root', 'default.rgw.log', 'volumes']
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 09:56:19 compute-0 podman[112735]: 2026-01-23 09:56:19.958501376 +0000 UTC m=+0.894886606 container create 874e8bd6d5e9faeb4e36ff33760e3777d9e85469059363c6a1023a41084bb05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hellman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:19] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 23 09:56:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:19] "GET /metrics HTTP/1.1" 200 48408 "" "Prometheus/2.51.0"
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:56:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 09:56:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:56:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:56:20 compute-0 sudo[112847]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:20 compute-0 systemd[1]: Started libpod-conmon-874e8bd6d5e9faeb4e36ff33760e3777d9e85469059363c6a1023a41084bb05f.scope.
Jan 23 09:56:20 compute-0 podman[112735]: 2026-01-23 09:56:19.933062007 +0000 UTC m=+0.869447257 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:56:20 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4487c9e81b79de380e9c3d381a0c9efa4eafcdae40a4e93b1945b0e33ea99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4487c9e81b79de380e9c3d381a0c9efa4eafcdae40a4e93b1945b0e33ea99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4487c9e81b79de380e9c3d381a0c9efa4eafcdae40a4e93b1945b0e33ea99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4487c9e81b79de380e9c3d381a0c9efa4eafcdae40a4e93b1945b0e33ea99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:56:20 compute-0 podman[112735]: 2026-01-23 09:56:20.066590295 +0000 UTC m=+1.002975545 container init 874e8bd6d5e9faeb4e36ff33760e3777d9e85469059363c6a1023a41084bb05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:56:20 compute-0 podman[112735]: 2026-01-23 09:56:20.075162721 +0000 UTC m=+1.011547951 container start 874e8bd6d5e9faeb4e36ff33760e3777d9e85469059363c6a1023a41084bb05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 09:56:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:20.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:20 compute-0 podman[112735]: 2026-01-23 09:56:20.080070872 +0000 UTC m=+1.016456122 container attach 874e8bd6d5e9faeb4e36ff33760e3777d9e85469059363c6a1023a41084bb05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 23 09:56:20 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 09:56:20 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.649s CPU time.
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:56:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:56:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:20.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:20 compute-0 lvm[113110]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:56:20 compute-0 lvm[113110]: VG ceph_vg0 finished
Jan 23 09:56:20 compute-0 affectionate_hellman[112884]: {}
Jan 23 09:56:20 compute-0 systemd[1]: libpod-874e8bd6d5e9faeb4e36ff33760e3777d9e85469059363c6a1023a41084bb05f.scope: Deactivated successfully.
Jan 23 09:56:20 compute-0 systemd[1]: libpod-874e8bd6d5e9faeb4e36ff33760e3777d9e85469059363c6a1023a41084bb05f.scope: Consumed 1.244s CPU time.
Jan 23 09:56:20 compute-0 podman[112735]: 2026-01-23 09:56:20.833009078 +0000 UTC m=+1.769394318 container died 874e8bd6d5e9faeb4e36ff33760e3777d9e85469059363c6a1023a41084bb05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:56:20 compute-0 python3.9[113100]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:56:21 compute-0 sudo[113281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dptrfvvdjjhrypbuzlldgvwgmlcrqojt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162181.4506807-273-188950530253772/AnsiballZ_dnf.py'
Jan 23 09:56:21 compute-0 sudo[113281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 09:56:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:21 compute-0 ceph-mon[74335]: pgmap v84: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:56:22 compute-0 python3.9[113283]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:56:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:22.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7d4487c9e81b79de380e9c3d381a0c9efa4eafcdae40a4e93b1945b0e33ea99-merged.mount: Deactivated successfully.
Jan 23 09:56:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095622 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 09:56:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:22.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:22 compute-0 podman[112735]: 2026-01-23 09:56:22.239440309 +0000 UTC m=+3.175825569 container remove 874e8bd6d5e9faeb4e36ff33760e3777d9e85469059363c6a1023a41084bb05f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 09:56:22 compute-0 sudo[112239]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:22 compute-0 systemd[1]: libpod-conmon-874e8bd6d5e9faeb4e36ff33760e3777d9e85469059363c6a1023a41084bb05f.scope: Deactivated successfully.
Jan 23 09:56:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:56:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:56:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:22 compute-0 sudo[113288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:56:22 compute-0 sudo[113288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:22 compute-0 sudo[113288]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:23 compute-0 ceph-mon[74335]: pgmap v85: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 09:56:23 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:23 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:56:23 compute-0 sudo[113281]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v86: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:24.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:24 compute-0 sudo[113464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovxntkqyllkzvwgvvqgmcfzhzjhrkfpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162183.8343577-297-122014369787140/AnsiballZ_command.py'
Jan 23 09:56:24 compute-0 sudo[113464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:24 compute-0 python3.9[113466]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:56:25 compute-0 sudo[113464]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:25 compute-0 ceph-mon[74335]: pgmap v86: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:25 compute-0 sudo[113752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luxlvbduzdlgkxsnqyzpuporwackzcex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162185.4951584-321-38076233100467/AnsiballZ_file.py'
Jan 23 09:56:25 compute-0 sudo[113752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:26 compute-0 python3.9[113754]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 23 09:56:26 compute-0 sudo[113752]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:26.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:56:26.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:56:26 compute-0 python3.9[113905]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:56:27 compute-0 ceph-mon[74335]: pgmap v87: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:27 compute-0 sudo[114057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhdqizsrvxrkiymwvsabfupuqeiadift ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162187.2506785-369-223633667921861/AnsiballZ_dnf.py'
Jan 23 09:56:27 compute-0 sudo[114057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:27 compute-0 python3.9[114059]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:56:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:28.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:29 compute-0 ceph-mon[74335]: pgmap v88: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:56:29 compute-0 sudo[114057]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:56:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:29] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 23 09:56:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:29] "GET /metrics HTTP/1.1" 200 48410 "" "Prometheus/2.51.0"
Jan 23 09:56:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:30.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:30 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 3.
Jan 23 09:56:30 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:56:30 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.649s CPU time.
Jan 23 09:56:30 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 09:56:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:30.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:30 compute-0 podman[114136]: 2026-01-23 09:56:30.43727727 +0000 UTC m=+0.048324307 container create fd6798f798f784b8073748ebca1512c31bc5cd772166271c0ff2bdccb06fff0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234261bf31c5ff356861e77739030c2e0cfbf1528b7a3958450618a58a05171d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234261bf31c5ff356861e77739030c2e0cfbf1528b7a3958450618a58a05171d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234261bf31c5ff356861e77739030c2e0cfbf1528b7a3958450618a58a05171d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234261bf31c5ff356861e77739030c2e0cfbf1528b7a3958450618a58a05171d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:56:30 compute-0 podman[114136]: 2026-01-23 09:56:30.508878492 +0000 UTC m=+0.119925549 container init fd6798f798f784b8073748ebca1512c31bc5cd772166271c0ff2bdccb06fff0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 09:56:30 compute-0 podman[114136]: 2026-01-23 09:56:30.41601733 +0000 UTC m=+0.027064387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:56:30 compute-0 podman[114136]: 2026-01-23 09:56:30.514289517 +0000 UTC m=+0.125336554 container start fd6798f798f784b8073748ebca1512c31bc5cd772166271c0ff2bdccb06fff0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 09:56:30 compute-0 bash[114136]: fd6798f798f784b8073748ebca1512c31bc5cd772166271c0ff2bdccb06fff0f
Jan 23 09:56:30 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 09:56:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 09:56:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 09:56:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 09:56:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 09:56:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 09:56:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 09:56:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 09:56:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:56:30 compute-0 sudo[114318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqdyjtfjlwzsuyzpoaaflxfeockplar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162190.4569528-396-232208772659363/AnsiballZ_dnf.py'
Jan 23 09:56:30 compute-0 sudo[114318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:31 compute-0 python3.9[114320]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:56:31 compute-0 ceph-mon[74335]: pgmap v89: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:56:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:56:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:32.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:32.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:32 compute-0 sudo[114318]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:33 compute-0 ceph-mon[74335]: pgmap v90: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:56:33 compute-0 sudo[114473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhhkeisjlcdjrqdqqsfcgpzhijuywdmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162193.3696837-432-52568014883190/AnsiballZ_stat.py'
Jan 23 09:56:33 compute-0 sudo[114473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:33 compute-0 python3.9[114475]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:56:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:56:33 compute-0 sudo[114473]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:34.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:34.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:34 compute-0 sudo[114629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcwuhyoltnnkpsxgtpaikgkqyfmphvfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162194.1283312-456-130755419662269/AnsiballZ_slurp.py'
Jan 23 09:56:34 compute-0 sudo[114629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:34 compute-0 python3.9[114631]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 23 09:56:34 compute-0 sudo[114629]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:56:34 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:56:35 compute-0 ceph-mon[74335]: pgmap v91: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:56:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:56:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:56:35 compute-0 sshd-session[110824]: Connection closed by 192.168.122.30 port 43832
Jan 23 09:56:35 compute-0 sshd-session[110818]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:56:35 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 23 09:56:35 compute-0 systemd[1]: session-40.scope: Consumed 19.861s CPU time.
Jan 23 09:56:35 compute-0 systemd-logind[784]: Session 40 logged out. Waiting for processes to exit.
Jan 23 09:56:35 compute-0 systemd-logind[784]: Removed session 40.
Jan 23 09:56:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:36.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:36.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 09:56:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:56:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:56:36.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:56:37 compute-0 ceph-mon[74335]: pgmap v92: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:56:37 compute-0 sudo[114658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:56:37 compute-0 sudo[114658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:37 compute-0 sudo[114658]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v93: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:56:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:38.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:38.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:39 compute-0 ceph-mon[74335]: pgmap v93: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:56:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:56:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:39] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Jan 23 09:56:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:39] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Jan 23 09:56:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:40.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:40.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:41 compute-0 ceph-mon[74335]: pgmap v94: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:56:41 compute-0 sshd-session[114687]: Accepted publickey for zuul from 192.168.122.30 port 39926 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:56:41 compute-0 systemd-logind[784]: New session 41 of user zuul.
Jan 23 09:56:41 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 23 09:56:41 compute-0 sshd-session[114687]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:56:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 09:56:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000058s ======
Jan 23 09:56:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:42.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Jan 23 09:56:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:42.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:42 compute-0 python3.9[114842]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 09:56:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 09:56:43 compute-0 ceph-mon[74335]: pgmap v95: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 09:56:43 compute-0 python3.9[115007]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:56:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:43 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095643 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 09:56:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 09:56:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:43 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:44.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:44.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:44 compute-0 ceph-mon[74335]: pgmap v96: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 09:56:44 compute-0 python3.9[115206]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:56:45 compute-0 sshd-session[114690]: Connection closed by 192.168.122.30 port 39926
Jan 23 09:56:45 compute-0 sshd-session[114687]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:56:45 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 23 09:56:45 compute-0 systemd[1]: session-41.scope: Consumed 2.511s CPU time.
Jan 23 09:56:45 compute-0 systemd-logind[784]: Session 41 logged out. Waiting for processes to exit.
Jan 23 09:56:45 compute-0 systemd-logind[784]: Removed session 41.
Jan 23 09:56:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:45 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 09:56:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:45 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:56:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:46.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:56:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095646 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 09:56:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:46.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:46 compute-0 sshd-session[115234]: Invalid user  from 194.187.176.241 port 23048
Jan 23 09:56:46 compute-0 sshd-session[115234]: Connection closed by invalid user  194.187.176.241 port 23048 [preauth]
Jan 23 09:56:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:46 compute-0 ceph-mon[74335]: pgmap v97: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 09:56:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:56:46.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:56:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:47 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 09:56:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:47 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:48.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840012e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:48.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:49 compute-0 ceph-mon[74335]: pgmap v98: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 09:56:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:49 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 09:56:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:49] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Jan 23 09:56:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:49] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Jan 23 09:56:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:49 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:56:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:56:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:56:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:56:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:56:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:56:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:56:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:56:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:50.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:50.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:56:50 compute-0 sshd-session[115240]: Accepted publickey for zuul from 192.168.122.30 port 58198 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:56:50 compute-0 systemd-logind[784]: New session 42 of user zuul.
Jan 23 09:56:50 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 23 09:56:50 compute-0 sshd-session[115240]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:56:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095651 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 09:56:51 compute-0 ceph-mon[74335]: pgmap v99: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 09:56:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:51 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84001e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:51 compute-0 python3.9[115393]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:56:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Jan 23 09:56:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:51 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:52.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:52.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:52 compute-0 python3.9[115549]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:56:53 compute-0 ceph-mon[74335]: pgmap v100: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Jan 23 09:56:53 compute-0 sudo[115703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxdbcxsuueglqwixubhhksjgcvtswydg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162213.12562-75-47409656928546/AnsiballZ_setup.py'
Jan 23 09:56:53 compute-0 sudo[115703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:53 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:53 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:56:53 compute-0 python3.9[115705]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:56:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:56:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:53 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:54 compute-0 sudo[115703]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:54.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:54.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:54 compute-0 sudo[115789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkmigxjixiujkuntoelprmwtyetnwpqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162213.12562-75-47409656928546/AnsiballZ_dnf.py'
Jan 23 09:56:54 compute-0 sudo[115789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:54 compute-0 ceph-mon[74335]: pgmap v101: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:56:54 compute-0 python3.9[115791]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:56:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:55 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:56:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:55 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:56.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:56 compute-0 sudo[115789]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:56.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:56 compute-0 sudo[115944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaibzgcumdpvgsomvqugfyyufdabapaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162216.3881612-111-114686083823497/AnsiballZ_setup.py'
Jan 23 09:56:56 compute-0 sudo[115944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 09:56:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:56:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:56:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:56:56.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:56:57 compute-0 python3.9[115946]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:56:57 compute-0 sudo[115954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:56:57 compute-0 sudo[115954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:56:57 compute-0 sudo[115954]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:57 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:56:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:57 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:56:57 compute-0 sudo[115944]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:57 compute-0 ceph-mon[74335]: pgmap v102: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:56:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:57 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v103: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Jan 23 09:56:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:57 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:56:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:58.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:56:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:56:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:56:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:58.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:56:58 compute-0 sudo[116166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntanrbygnfyaoujuuribrojjlbvncnvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162217.8619838-144-212845702813888/AnsiballZ_file.py'
Jan 23 09:56:58 compute-0 sudo[116166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:58 compute-0 ceph-mon[74335]: pgmap v103: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Jan 23 09:56:59 compute-0 python3.9[116168]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:56:59 compute-0 sudo[116166]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:59 compute-0 sudo[116318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsmupkhdyvszefanamrtofatwwjbfiqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162219.2121136-168-152186331429296/AnsiballZ_command.py'
Jan 23 09:56:59 compute-0 sudo[116318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:56:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:59 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:56:59 compute-0 python3.9[116320]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:56:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 23 09:56:59 compute-0 sudo[116318]: pam_unix(sudo:session): session closed for user root
Jan 23 09:56:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:59] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Jan 23 09:56:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:56:59] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Jan 23 09:56:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:56:59 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:57:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:00.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:57:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:57:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:00.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:57:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 09:57:00 compute-0 sudo[116485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlbeyuljhuzpaiedozbsizlifwmmbpfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162220.1734293-192-73459267952329/AnsiballZ_stat.py'
Jan 23 09:57:00 compute-0 sudo[116485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:01 compute-0 python3.9[116487]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:57:01 compute-0 sudo[116485]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:01 compute-0 sudo[116563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oavnefcnrnityjihaoiliguhelaqbbca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162220.1734293-192-73459267952329/AnsiballZ_file.py'
Jan 23 09:57:01 compute-0 sudo[116563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:01 compute-0 python3.9[116565]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:01 compute-0 sudo[116563]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:01 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:01 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:57:01 compute-0 ceph-mon[74335]: pgmap v104: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 23 09:57:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 767 B/s wr, 3 op/s
Jan 23 09:57:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:01 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:02 compute-0 sudo[116717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikkkdmvyktjvirfszcaawwhinnorzrwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162221.7838776-228-149636708736644/AnsiballZ_stat.py'
Jan 23 09:57:02 compute-0 sudo[116717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:02.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:02.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:02 compute-0 python3.9[116719]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:57:02 compute-0 sudo[116717]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:02 compute-0 sudo[116795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnszxzbrqpdofjmpnbrmyxewrxrdylyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162221.7838776-228-149636708736644/AnsiballZ_file.py'
Jan 23 09:57:02 compute-0 sudo[116795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:02 compute-0 python3.9[116797]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:57:02 compute-0 sudo[116795]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:02 compute-0 ceph-mon[74335]: pgmap v105: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 767 B/s wr, 3 op/s
Jan 23 09:57:03 compute-0 sudo[116947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqdczyrrnguusqfjkzvaxwvgzkqywysw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162222.9784055-267-63129362521464/AnsiballZ_ini_file.py'
Jan 23 09:57:03 compute-0 sudo[116947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:03 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:03 compute-0 python3.9[116949]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:57:03 compute-0 sudo[116947]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v106: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 09:57:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840028c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:04 compute-0 sudo[117101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iegclxzrvhliyyxvabizgsegjgqvhhxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162223.8665729-267-150034694142380/AnsiballZ_ini_file.py'
Jan 23 09:57:04 compute-0 sudo[117101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:57:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:04.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:57:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:04.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:04 compute-0 python3.9[117103]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:57:04 compute-0 sudo[117101]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 09:57:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:57:04 compute-0 sudo[117253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvjczphcrfjoiseedlyqzcbxscjnmrom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162224.5180635-267-30200526253643/AnsiballZ_ini_file.py'
Jan 23 09:57:04 compute-0 sudo[117253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:57:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:57:04 compute-0 python3.9[117255]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:57:04 compute-0 sudo[117253]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:57:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:57:05 compute-0 ceph-mon[74335]: pgmap v106: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 09:57:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:57:05 compute-0 sudo[117405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sppssbsdomhgcesvyamgfaivlejwdbff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162225.1165774-267-176025010063317/AnsiballZ_ini_file.py'
Jan 23 09:57:05 compute-0 sudo[117405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:05 compute-0 python3.9[117407]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:57:05 compute-0 sudo[117405]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:05 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 09:57:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:06.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:57:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:06.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:57:06 compute-0 ceph-mon[74335]: pgmap v107: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 09:57:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:57:06.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:57:07 compute-0 sudo[117559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnkfkgcynfuvbziihugamlakyfpojepu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162227.1051674-360-228732238726987/AnsiballZ_dnf.py'
Jan 23 09:57:07 compute-0 sudo[117559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:07 compute-0 python3.9[117561]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:57:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:07 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095707 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 09:57:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:07 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 09:57:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 23 09:57:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:08.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:08.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:09 compute-0 ceph-mon[74335]: pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 23 09:57:09 compute-0 sudo[117559]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:09 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 23 09:57:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:09] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 09:57:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:09] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 09:57:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:10 compute-0 sudo[117717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwbfjyjifycsuvvghmncygwqhlqxxlif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162229.862965-393-94831330963616/AnsiballZ_setup.py'
Jan 23 09:57:10 compute-0 sudo[117717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:57:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:10.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:57:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:10.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:10 compute-0 python3.9[117719]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:57:10 compute-0 sudo[117717]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:10 compute-0 sudo[117871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjqimseipotavequomavbwcthigwksqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162230.6955755-417-241241033362320/AnsiballZ_stat.py'
Jan 23 09:57:10 compute-0 sudo[117871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:11 compute-0 ceph-mon[74335]: pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 23 09:57:11 compute-0 python3.9[117873]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:57:11 compute-0 sudo[117871]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:11 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:11 compute-0 sudo[118024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pumvryxgqldjbjjrpbmyjpzutbngqxpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162231.4737933-444-248758829587366/AnsiballZ_stat.py'
Jan 23 09:57:11 compute-0 sudo[118024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 23 09:57:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:11 compute-0 python3.9[118026]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:57:11 compute-0 sudo[118024]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:57:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:12.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:57:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:57:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:12.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:57:12 compute-0 sudo[118177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksqrvyykzlcxnoujemwsqxkspodlgeru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162232.295629-474-237188522556070/AnsiballZ_command.py'
Jan 23 09:57:12 compute-0 sudo[118177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:12 compute-0 python3.9[118179]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:57:12 compute-0 sudo[118177]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095713 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 09:57:13 compute-0 ceph-mon[74335]: pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 23 09:57:13 compute-0 sudo[118330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snvubpjlxhkopijbwinuvtrashhocnpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162233.1721394-504-153621888987658/AnsiballZ_service_facts.py'
Jan 23 09:57:13 compute-0 sudo[118330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:13 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:13 compute-0 python3.9[118332]: ansible-service_facts Invoked
Jan 23 09:57:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 09:57:13 compute-0 network[118350]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 09:57:13 compute-0 network[118351]: 'network-scripts' will be removed from distribution in near future.
Jan 23 09:57:13 compute-0 network[118352]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 09:57:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:57:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:14.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:57:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:14.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:15 compute-0 ceph-mon[74335]: pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 09:57:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:15 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 09:57:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:16.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:16.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:16 compute-0 sudo[118330]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:57:16.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:57:17 compute-0 sudo[118493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:57:17 compute-0 sudo[118493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:17 compute-0 sudo[118493]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:17 compute-0 ceph-mon[74335]: pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 09:57:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:17 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 09:57:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:57:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:18.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:57:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 23 09:57:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:18.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 23 09:57:18 compute-0 ceph-mon[74335]: pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 09:57:19 compute-0 sudo[118668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zthmuzirbkvuxkabmdjcthwjozhcwhkw ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769162239.3006716-549-207128209433987/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769162239.3006716-549-207128209433987/args'
Jan 23 09:57:19 compute-0 sudo[118668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:19 compute-0 sudo[118668]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:19 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Jan 23 09:57:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:57:19
Jan 23 09:57:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:57:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 09:57:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', '.nfs', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'images']
Jan 23 09:57:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 09:57:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:19] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 09:57:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:19] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 09:57:19 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 09:57:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:57:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:57:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:57:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:57:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 23 09:57:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:20.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 23 09:57:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 23 09:57:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:20.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 23 09:57:20 compute-0 sudo[118837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpmnojzvcfdcnefxzerugggtxswubxxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162240.0625534-582-227781500116812/AnsiballZ_dnf.py'
Jan 23 09:57:20 compute-0 sudo[118837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:20 compute-0 python3.9[118839]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 09:57:20 compute-0 ceph-mon[74335]: pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Jan 23 09:57:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:57:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:21 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Jan 23 09:57:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:22 compute-0 sudo[118837]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 23 09:57:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:22.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 23 09:57:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 23 09:57:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:22.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 23 09:57:22 compute-0 sudo[118867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:57:22 compute-0 sudo[118867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:22 compute-0 sudo[118867]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:22 compute-0 sudo[118892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 09:57:22 compute-0 sudo[118892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:23 compute-0 sudo[118892]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:23 compute-0 ceph-mon[74335]: pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Jan 23 09:57:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:57:23 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:57:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:57:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:57:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:57:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:57:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:57:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:57:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 09:57:23 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:57:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 09:57:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:57:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:57:23 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:57:23 compute-0 sudo[119046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:57:23 compute-0 sudo[119046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:23 compute-0 sudo[119046]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:23 compute-0 sudo[119103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykjrsnxjvtqgfgdagkzvoyuvfeeythkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162242.8487945-621-175633015990967/AnsiballZ_package_facts.py'
Jan 23 09:57:23 compute-0 sudo[119103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:23 compute-0 sudo[119096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 09:57:23 compute-0 sudo[119096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:23 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:23 compute-0 python3.9[119121]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 23 09:57:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:23 compute-0 podman[119167]: 2026-01-23 09:57:23.923733286 +0000 UTC m=+0.047947267 container create a9bd64d0319fa266d4afca71f7a7a4f9635e41de4208c66134fcd95f17eeed22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 23 09:57:23 compute-0 systemd[90024]: Created slice User Background Tasks Slice.
Jan 23 09:57:23 compute-0 systemd[90024]: Starting Cleanup of User's Temporary Files and Directories...
Jan 23 09:57:23 compute-0 systemd[1]: Started libpod-conmon-a9bd64d0319fa266d4afca71f7a7a4f9635e41de4208c66134fcd95f17eeed22.scope.
Jan 23 09:57:23 compute-0 systemd[90024]: Finished Cleanup of User's Temporary Files and Directories.
Jan 23 09:57:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:57:23 compute-0 podman[119167]: 2026-01-23 09:57:23.900614931 +0000 UTC m=+0.024828912 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:57:24 compute-0 podman[119167]: 2026-01-23 09:57:23.999982509 +0000 UTC m=+0.124196520 container init a9bd64d0319fa266d4afca71f7a7a4f9635e41de4208c66134fcd95f17eeed22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:57:24 compute-0 podman[119167]: 2026-01-23 09:57:24.009066334 +0000 UTC m=+0.133280305 container start a9bd64d0319fa266d4afca71f7a7a4f9635e41de4208c66134fcd95f17eeed22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 09:57:24 compute-0 podman[119167]: 2026-01-23 09:57:24.01344455 +0000 UTC m=+0.137658551 container attach a9bd64d0319fa266d4afca71f7a7a4f9635e41de4208c66134fcd95f17eeed22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mendel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 09:57:24 compute-0 cool_mendel[119184]: 167 167
Jan 23 09:57:24 compute-0 systemd[1]: libpod-a9bd64d0319fa266d4afca71f7a7a4f9635e41de4208c66134fcd95f17eeed22.scope: Deactivated successfully.
Jan 23 09:57:24 compute-0 podman[119167]: 2026-01-23 09:57:24.016924197 +0000 UTC m=+0.141138188 container died a9bd64d0319fa266d4afca71f7a7a4f9635e41de4208c66134fcd95f17eeed22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mendel, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:57:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-de23ff37f1e4dfba40c4e722a127deaa65cb929c14aa414daddbebac651ad872-merged.mount: Deactivated successfully.
Jan 23 09:57:24 compute-0 podman[119167]: 2026-01-23 09:57:24.063917531 +0000 UTC m=+0.188131512 container remove a9bd64d0319fa266d4afca71f7a7a4f9635e41de4208c66134fcd95f17eeed22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:57:24 compute-0 systemd[1]: libpod-conmon-a9bd64d0319fa266d4afca71f7a7a4f9635e41de4208c66134fcd95f17eeed22.scope: Deactivated successfully.
Jan 23 09:57:24 compute-0 sudo[119103]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:24.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:24 compute-0 podman[119215]: 2026-01-23 09:57:24.233392616 +0000 UTC m=+0.050652197 container create edf2f9bc3e521711212897b5c117523cc646e6aa4a63c48c94efd49bc29fbeb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:57:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:57:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:57:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:57:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:57:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:57:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:57:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:57:24 compute-0 systemd[1]: Started libpod-conmon-edf2f9bc3e521711212897b5c117523cc646e6aa4a63c48c94efd49bc29fbeb3.scope.
Jan 23 09:57:24 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a155239ce668da1a16e6c6584e9edda27c878eeb406fc8c4ff6534085cb3f75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a155239ce668da1a16e6c6584e9edda27c878eeb406fc8c4ff6534085cb3f75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a155239ce668da1a16e6c6584e9edda27c878eeb406fc8c4ff6534085cb3f75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a155239ce668da1a16e6c6584e9edda27c878eeb406fc8c4ff6534085cb3f75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a155239ce668da1a16e6c6584e9edda27c878eeb406fc8c4ff6534085cb3f75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:24 compute-0 podman[119215]: 2026-01-23 09:57:24.212157715 +0000 UTC m=+0.029417316 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:57:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:24.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:24 compute-0 podman[119215]: 2026-01-23 09:57:24.310670754 +0000 UTC m=+0.127930345 container init edf2f9bc3e521711212897b5c117523cc646e6aa4a63c48c94efd49bc29fbeb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:57:24 compute-0 podman[119215]: 2026-01-23 09:57:24.319926624 +0000 UTC m=+0.137186195 container start edf2f9bc3e521711212897b5c117523cc646e6aa4a63c48c94efd49bc29fbeb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 09:57:24 compute-0 podman[119215]: 2026-01-23 09:57:24.324463396 +0000 UTC m=+0.141722997 container attach edf2f9bc3e521711212897b5c117523cc646e6aa4a63c48c94efd49bc29fbeb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 23 09:57:24 compute-0 dreamy_torvalds[119248]: --> passed data devices: 0 physical, 1 LVM
Jan 23 09:57:24 compute-0 dreamy_torvalds[119248]: --> All data devices are unavailable
Jan 23 09:57:24 compute-0 systemd[1]: libpod-edf2f9bc3e521711212897b5c117523cc646e6aa4a63c48c94efd49bc29fbeb3.scope: Deactivated successfully.
Jan 23 09:57:24 compute-0 podman[119215]: 2026-01-23 09:57:24.700877492 +0000 UTC m=+0.518137083 container died edf2f9bc3e521711212897b5c117523cc646e6aa4a63c48c94efd49bc29fbeb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 23 09:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a155239ce668da1a16e6c6584e9edda27c878eeb406fc8c4ff6534085cb3f75-merged.mount: Deactivated successfully.
Jan 23 09:57:24 compute-0 podman[119215]: 2026-01-23 09:57:24.756130683 +0000 UTC m=+0.573390254 container remove edf2f9bc3e521711212897b5c117523cc646e6aa4a63c48c94efd49bc29fbeb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:57:24 compute-0 systemd[1]: libpod-conmon-edf2f9bc3e521711212897b5c117523cc646e6aa4a63c48c94efd49bc29fbeb3.scope: Deactivated successfully.
Jan 23 09:57:24 compute-0 sudo[119096]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:24 compute-0 sudo[119303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:57:24 compute-0 sudo[119303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:24 compute-0 sudo[119303]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:24 compute-0 sudo[119352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 09:57:24 compute-0 sudo[119352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:25 compute-0 sudo[119450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnyefxnbdtbxlhpltmvmtjxrwjwboozd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162244.7871823-651-266209388612242/AnsiballZ_stat.py'
Jan 23 09:57:25 compute-0 sudo[119450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:25 compute-0 python3.9[119452]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:57:25 compute-0 sudo[119450]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:25 compute-0 podman[119494]: 2026-01-23 09:57:25.377578845 +0000 UTC m=+0.049452377 container create e90ec21e5b8c56876cc5c1b67c06f52730aa56fea9c97c44697f6e91302e323d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_archimedes, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 09:57:25 compute-0 ceph-mon[74335]: pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:25 compute-0 podman[119494]: 2026-01-23 09:57:25.356626683 +0000 UTC m=+0.028500245 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:57:25 compute-0 systemd[1]: Started libpod-conmon-e90ec21e5b8c56876cc5c1b67c06f52730aa56fea9c97c44697f6e91302e323d.scope.
Jan 23 09:57:25 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:57:25 compute-0 podman[119494]: 2026-01-23 09:57:25.499329332 +0000 UTC m=+0.171202884 container init e90ec21e5b8c56876cc5c1b67c06f52730aa56fea9c97c44697f6e91302e323d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_archimedes, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:57:25 compute-0 podman[119494]: 2026-01-23 09:57:25.506859465 +0000 UTC m=+0.178732997 container start e90ec21e5b8c56876cc5c1b67c06f52730aa56fea9c97c44697f6e91302e323d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 09:57:25 compute-0 modest_archimedes[119534]: 167 167
Jan 23 09:57:25 compute-0 systemd[1]: libpod-e90ec21e5b8c56876cc5c1b67c06f52730aa56fea9c97c44697f6e91302e323d.scope: Deactivated successfully.
Jan 23 09:57:25 compute-0 sudo[119598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajfnkddchhxuwprvgemvnahhtwonogkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162244.7871823-651-266209388612242/AnsiballZ_file.py'
Jan 23 09:57:25 compute-0 sudo[119598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:25 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:25 compute-0 podman[119494]: 2026-01-23 09:57:25.760998845 +0000 UTC m=+0.432872397 container attach e90ec21e5b8c56876cc5c1b67c06f52730aa56fea9c97c44697f6e91302e323d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 09:57:25 compute-0 podman[119494]: 2026-01-23 09:57:25.761707359 +0000 UTC m=+0.433580911 container died e90ec21e5b8c56876cc5c1b67c06f52730aa56fea9c97c44697f6e91302e323d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_archimedes, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 09:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-524fada9655b470a285ee07de1fa56f3ec8b26c8ff08064a96ef855c866b3986-merged.mount: Deactivated successfully.
Jan 23 09:57:25 compute-0 podman[119494]: 2026-01-23 09:57:25.822985831 +0000 UTC m=+0.494859363 container remove e90ec21e5b8c56876cc5c1b67c06f52730aa56fea9c97c44697f6e91302e323d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:57:25 compute-0 systemd[1]: libpod-conmon-e90ec21e5b8c56876cc5c1b67c06f52730aa56fea9c97c44697f6e91302e323d.scope: Deactivated successfully.
Jan 23 09:57:25 compute-0 python3.9[119600]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:25 compute-0 sudo[119598]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:26 compute-0 podman[119631]: 2026-01-23 09:57:26.015018413 +0000 UTC m=+0.053072789 container create ebeda9a2f65c21d5792fcb8f08ad4c5539accc34c97a21d2e9f8c1db8fc1422d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 09:57:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:26 compute-0 systemd[1]: Started libpod-conmon-ebeda9a2f65c21d5792fcb8f08ad4c5539accc34c97a21d2e9f8c1db8fc1422d.scope.
Jan 23 09:57:26 compute-0 podman[119631]: 2026-01-23 09:57:25.989906392 +0000 UTC m=+0.027960788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:57:26 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a0ebaa4c6fdaa0be93f0b11c9b836ebfb6da12e98831758249793bc1c49bfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a0ebaa4c6fdaa0be93f0b11c9b836ebfb6da12e98831758249793bc1c49bfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a0ebaa4c6fdaa0be93f0b11c9b836ebfb6da12e98831758249793bc1c49bfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a0ebaa4c6fdaa0be93f0b11c9b836ebfb6da12e98831758249793bc1c49bfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:26 compute-0 podman[119631]: 2026-01-23 09:57:26.147602713 +0000 UTC m=+0.185657119 container init ebeda9a2f65c21d5792fcb8f08ad4c5539accc34c97a21d2e9f8c1db8fc1422d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 09:57:26 compute-0 podman[119631]: 2026-01-23 09:57:26.157176924 +0000 UTC m=+0.195231300 container start ebeda9a2f65c21d5792fcb8f08ad4c5539accc34c97a21d2e9f8c1db8fc1422d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:57:26 compute-0 podman[119631]: 2026-01-23 09:57:26.163019149 +0000 UTC m=+0.201073535 container attach ebeda9a2f65c21d5792fcb8f08ad4c5539accc34c97a21d2e9f8c1db8fc1422d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_neumann, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 09:57:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:26.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 23 09:57:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:26.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 23 09:57:26 compute-0 sudo[119784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtsztsstgvjvhsczqejxoiyupkyztbxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162246.1009161-687-142216003056719/AnsiballZ_stat.py'
Jan 23 09:57:26 compute-0 sudo[119784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:26 compute-0 epic_neumann[119670]: {
Jan 23 09:57:26 compute-0 epic_neumann[119670]:     "1": [
Jan 23 09:57:26 compute-0 epic_neumann[119670]:         {
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             "devices": [
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "/dev/loop3"
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             ],
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             "lv_name": "ceph_lv0",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             "lv_size": "21470642176",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             "name": "ceph_lv0",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             "tags": {
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.cluster_name": "ceph",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.crush_device_class": "",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.encrypted": "0",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.osd_id": "1",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.type": "block",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.vdo": "0",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:                 "ceph.with_tpm": "0"
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             },
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             "type": "block",
Jan 23 09:57:26 compute-0 epic_neumann[119670]:             "vg_name": "ceph_vg0"
Jan 23 09:57:26 compute-0 epic_neumann[119670]:         }
Jan 23 09:57:26 compute-0 epic_neumann[119670]:     ]
Jan 23 09:57:26 compute-0 epic_neumann[119670]: }
Jan 23 09:57:26 compute-0 systemd[1]: libpod-ebeda9a2f65c21d5792fcb8f08ad4c5539accc34c97a21d2e9f8c1db8fc1422d.scope: Deactivated successfully.
Jan 23 09:57:26 compute-0 conmon[119670]: conmon ebeda9a2f65c21d5792f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebeda9a2f65c21d5792fcb8f08ad4c5539accc34c97a21d2e9f8c1db8fc1422d.scope/container/memory.events
Jan 23 09:57:26 compute-0 podman[119631]: 2026-01-23 09:57:26.518217385 +0000 UTC m=+0.556271771 container died ebeda9a2f65c21d5792fcb8f08ad4c5539accc34c97a21d2e9f8c1db8fc1422d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_neumann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:57:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-66a0ebaa4c6fdaa0be93f0b11c9b836ebfb6da12e98831758249793bc1c49bfd-merged.mount: Deactivated successfully.
Jan 23 09:57:26 compute-0 podman[119631]: 2026-01-23 09:57:26.565197798 +0000 UTC m=+0.603252174 container remove ebeda9a2f65c21d5792fcb8f08ad4c5539accc34c97a21d2e9f8c1db8fc1422d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:57:26 compute-0 systemd[1]: libpod-conmon-ebeda9a2f65c21d5792fcb8f08ad4c5539accc34c97a21d2e9f8c1db8fc1422d.scope: Deactivated successfully.
Jan 23 09:57:26 compute-0 sudo[119352]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:26 compute-0 python3.9[119788]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:57:26 compute-0 sudo[119801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:57:26 compute-0 sudo[119801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:26 compute-0 sudo[119801]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:26 compute-0 sudo[119784]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:26 compute-0 sudo[119828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 09:57:26 compute-0 sudo[119828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:26 compute-0 sudo[119926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmrfppthrtmlrbauwyjwhdmizhzqyggn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162246.1009161-687-142216003056719/AnsiballZ_file.py'
Jan 23 09:57:26 compute-0 sudo[119926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:57:26.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 09:57:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:57:26.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 09:57:27 compute-0 python3.9[119928]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:27 compute-0 sudo[119926]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:27 compute-0 podman[119968]: 2026-01-23 09:57:27.205889014 +0000 UTC m=+0.078845272 container create ccef0e9391942990b5ac4f5b23ff366a10378f3f8d2224b97935cc97bbafaaea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:57:27 compute-0 podman[119968]: 2026-01-23 09:57:27.151232634 +0000 UTC m=+0.024188922 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:57:27 compute-0 systemd[1]: Started libpod-conmon-ccef0e9391942990b5ac4f5b23ff366a10378f3f8d2224b97935cc97bbafaaea.scope.
Jan 23 09:57:27 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:57:27 compute-0 podman[119968]: 2026-01-23 09:57:27.430048021 +0000 UTC m=+0.303004299 container init ccef0e9391942990b5ac4f5b23ff366a10378f3f8d2224b97935cc97bbafaaea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_swirles, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 09:57:27 compute-0 ceph-mon[74335]: pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:27 compute-0 podman[119968]: 2026-01-23 09:57:27.437760389 +0000 UTC m=+0.310716647 container start ccef0e9391942990b5ac4f5b23ff366a10378f3f8d2224b97935cc97bbafaaea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 09:57:27 compute-0 podman[119968]: 2026-01-23 09:57:27.442185648 +0000 UTC m=+0.315141926 container attach ccef0e9391942990b5ac4f5b23ff366a10378f3f8d2224b97935cc97bbafaaea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_swirles, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 23 09:57:27 compute-0 festive_swirles[120009]: 167 167
Jan 23 09:57:27 compute-0 systemd[1]: libpod-ccef0e9391942990b5ac4f5b23ff366a10378f3f8d2224b97935cc97bbafaaea.scope: Deactivated successfully.
Jan 23 09:57:27 compute-0 podman[119968]: 2026-01-23 09:57:27.443259724 +0000 UTC m=+0.316216002 container died ccef0e9391942990b5ac4f5b23ff366a10378f3f8d2224b97935cc97bbafaaea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_swirles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d3c4dc650b70bf315792cca758ef4a4e50635ec99d4c4d343a9514869711dc6-merged.mount: Deactivated successfully.
Jan 23 09:57:27 compute-0 podman[119968]: 2026-01-23 09:57:27.527476414 +0000 UTC m=+0.400432662 container remove ccef0e9391942990b5ac4f5b23ff366a10378f3f8d2224b97935cc97bbafaaea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 09:57:27 compute-0 systemd[1]: libpod-conmon-ccef0e9391942990b5ac4f5b23ff366a10378f3f8d2224b97935cc97bbafaaea.scope: Deactivated successfully.
Jan 23 09:57:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:27 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:27 compute-0 podman[120034]: 2026-01-23 09:57:27.676698271 +0000 UTC m=+0.026461137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:57:27 compute-0 podman[120034]: 2026-01-23 09:57:27.792885243 +0000 UTC m=+0.142648089 container create 473e1d43b87c527c42e365dbaf833c6c1fa4e00d3385913c1cf8f491c2e9ed7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_albattani, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 09:57:27 compute-0 systemd[1]: Started libpod-conmon-473e1d43b87c527c42e365dbaf833c6c1fa4e00d3385913c1cf8f491c2e9ed7f.scope.
Jan 23 09:57:27 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9628fe7151f7832f501c324792fe8a759249ad0b8f3cf450d1003077d6d33be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9628fe7151f7832f501c324792fe8a759249ad0b8f3cf450d1003077d6d33be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9628fe7151f7832f501c324792fe8a759249ad0b8f3cf450d1003077d6d33be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9628fe7151f7832f501c324792fe8a759249ad0b8f3cf450d1003077d6d33be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:57:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:27 compute-0 podman[120034]: 2026-01-23 09:57:27.884093977 +0000 UTC m=+0.233856853 container init 473e1d43b87c527c42e365dbaf833c6c1fa4e00d3385913c1cf8f491c2e9ed7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_albattani, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 09:57:27 compute-0 podman[120034]: 2026-01-23 09:57:27.894506146 +0000 UTC m=+0.244268992 container start 473e1d43b87c527c42e365dbaf833c6c1fa4e00d3385913c1cf8f491c2e9ed7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_albattani, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 23 09:57:27 compute-0 podman[120034]: 2026-01-23 09:57:27.902201634 +0000 UTC m=+0.251964610 container attach 473e1d43b87c527c42e365dbaf833c6c1fa4e00d3385913c1cf8f491c2e9ed7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_albattani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:57:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 23 09:57:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:28.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 23 09:57:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:28.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:28 compute-0 ceph-mon[74335]: pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:28 compute-0 sudo[120250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbhuyzfpeutnwkxeapqwyvfjnirsvcuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162248.1142333-741-123956472149451/AnsiballZ_lineinfile.py'
Jan 23 09:57:28 compute-0 sudo[120250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:28 compute-0 lvm[120253]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:57:28 compute-0 lvm[120253]: VG ceph_vg0 finished
Jan 23 09:57:28 compute-0 suspicious_albattani[120051]: {}
Jan 23 09:57:28 compute-0 systemd[1]: libpod-473e1d43b87c527c42e365dbaf833c6c1fa4e00d3385913c1cf8f491c2e9ed7f.scope: Deactivated successfully.
Jan 23 09:57:28 compute-0 systemd[1]: libpod-473e1d43b87c527c42e365dbaf833c6c1fa4e00d3385913c1cf8f491c2e9ed7f.scope: Consumed 1.316s CPU time.
Jan 23 09:57:28 compute-0 podman[120034]: 2026-01-23 09:57:28.740122226 +0000 UTC m=+1.089885092 container died 473e1d43b87c527c42e365dbaf833c6c1fa4e00d3385913c1cf8f491c2e9ed7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_albattani, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 09:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9628fe7151f7832f501c324792fe8a759249ad0b8f3cf450d1003077d6d33be-merged.mount: Deactivated successfully.
Jan 23 09:57:28 compute-0 podman[120034]: 2026-01-23 09:57:28.812850421 +0000 UTC m=+1.162613287 container remove 473e1d43b87c527c42e365dbaf833c6c1fa4e00d3385913c1cf8f491c2e9ed7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_albattani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:57:28 compute-0 systemd[1]: libpod-conmon-473e1d43b87c527c42e365dbaf833c6c1fa4e00d3385913c1cf8f491c2e9ed7f.scope: Deactivated successfully.
Jan 23 09:57:28 compute-0 python3.9[120254]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:28 compute-0 sudo[119828]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:57:28 compute-0 sudo[120250]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:28 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:57:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:57:28 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:57:28 compute-0 sudo[120279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:57:28 compute-0 sudo[120279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:28 compute-0 sudo[120279]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:29 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:29] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:57:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:29] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:57:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:30 compute-0 sudo[120448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnkchpujmaoqykbefadzkyomzjkxmwko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162249.893878-786-141639868323452/AnsiballZ_setup.py'
Jan 23 09:57:30 compute-0 sudo[120448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:30.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:57:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:57:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:30.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:30 compute-0 python3.9[120450]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:57:30 compute-0 sudo[120448]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:31 compute-0 sudo[120532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ualmnpunkrbyuonyfihrcosvevuoyjrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162249.893878-786-141639868323452/AnsiballZ_systemd.py'
Jan 23 09:57:31 compute-0 sudo[120532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:31 compute-0 ceph-mon[74335]: pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:31 compute-0 python3.9[120534]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:57:31 compute-0 sudo[120532]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:31 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 09:57:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000067s ======
Jan 23 09:57:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:32.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000067s
Jan 23 09:57:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:32.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:32 compute-0 sshd-session[115243]: Connection closed by 192.168.122.30 port 58198
Jan 23 09:57:32 compute-0 sshd-session[115240]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:57:32 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 23 09:57:32 compute-0 systemd[1]: session-42.scope: Consumed 25.839s CPU time.
Jan 23 09:57:32 compute-0 systemd-logind[784]: Session 42 logged out. Waiting for processes to exit.
Jan 23 09:57:32 compute-0 systemd-logind[784]: Removed session 42.
Jan 23 09:57:32 compute-0 ceph-mon[74335]: pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 09:57:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:33 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:34.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:34.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:34 compute-0 ceph-mon[74335]: pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:57:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:57:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:35 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:57:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:36.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:36.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:36 compute-0 ceph-mon[74335]: pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:57:36.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:57:37 compute-0 sudo[120567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:57:37 compute-0 sudo[120567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:37 compute-0 sudo[120567]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:37 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:38 compute-0 sshd-session[120594]: Accepted publickey for zuul from 192.168.122.30 port 60356 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:57:38 compute-0 systemd-logind[784]: New session 43 of user zuul.
Jan 23 09:57:38 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 23 09:57:38 compute-0 sshd-session[120594]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:57:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 23 09:57:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:38.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 23 09:57:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 23 09:57:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:38.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 23 09:57:38 compute-0 sudo[120747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yopstkksdmdbzxrdufomwqvzzgqdfcof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162258.2872295-21-40813349259242/AnsiballZ_file.py'
Jan 23 09:57:38 compute-0 sudo[120747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:38 compute-0 python3.9[120749]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:38 compute-0 ceph-mon[74335]: pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:39 compute-0 sudo[120747]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:39 compute-0 sudo[120900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqwdhegjohaxjyuevsiyvvwwvyguqdfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162259.227241-57-106046776055038/AnsiballZ_stat.py'
Jan 23 09:57:39 compute-0 sudo[120900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:39 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:39 compute-0 python3.9[120902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:57:39 compute-0 sudo[120900]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:39] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:57:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:39] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:57:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:40 compute-0 sudo[120979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvhvmqjdlofayvwivjomxpetwhygethu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162259.227241-57-106046776055038/AnsiballZ_file.py'
Jan 23 09:57:40 compute-0 sudo[120979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:40.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:40.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:40 compute-0 python3.9[120981]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:40 compute-0 sudo[120979]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:40 compute-0 sshd-session[120597]: Connection closed by 192.168.122.30 port 60356
Jan 23 09:57:40 compute-0 sshd-session[120594]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:57:40 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 23 09:57:40 compute-0 systemd[1]: session-43.scope: Consumed 1.687s CPU time.
Jan 23 09:57:40 compute-0 systemd-logind[784]: Session 43 logged out. Waiting for processes to exit.
Jan 23 09:57:40 compute-0 systemd-logind[784]: Removed session 43.
Jan 23 09:57:41 compute-0 ceph-mon[74335]: pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:41 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 09:57:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 09:57:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2483 writes, 11K keys, 2483 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2482 writes, 2482 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2483 writes, 11K keys, 2483 commit groups, 1.0 writes per commit group, ingest: 21.86 MB, 0.04 MB/s
                                           Interval WAL: 2482 writes, 2482 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     44.8      0.38              0.24         4    0.096       0      0       0.0       0.0
                                             L6      1/0   13.14 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1     81.3     73.2      0.49              0.12         3    0.164     12K   1355       0.0       0.0
                                            Sum      1/0   13.14 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.1     45.7     60.8      0.87              0.36         7    0.125     12K   1355       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.1     45.9     61.0      0.87              0.36         6    0.145     12K   1355       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     81.3     73.2      0.49              0.12         3    0.164     12K   1355       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     45.1      0.38              0.24         3    0.126       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.017, interval 0.017
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.09 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.9 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5569ddb77350#2 capacity: 304.00 MB usage: 1.04 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 8.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(69,929.38 KB,0.29855%) FilterBlock(8,46.67 KB,0.0149928%) IndexBlock(8,93.09 KB,0.0299052%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 09:57:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 23 09:57:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:42.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 23 09:57:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 23 09:57:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:42.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 23 09:57:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:43 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78002320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 23 09:57:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:44.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 23 09:57:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:44.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:45 compute-0 ceph-mon[74335]: pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 09:57:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:45 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003ce0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78002320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:46.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:46.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:46 compute-0 ceph-mon[74335]: pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:46 compute-0 ceph-mon[74335]: pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:57:46.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:57:47 compute-0 sshd-session[121012]: Accepted publickey for zuul from 192.168.122.30 port 58418 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:57:47 compute-0 systemd-logind[784]: New session 44 of user zuul.
Jan 23 09:57:47 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 23 09:57:47 compute-0 sshd-session[121012]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:57:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:47 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:48.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 23 09:57:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:48.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 23 09:57:48 compute-0 python3.9[121168]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:57:49 compute-0 ceph-mon[74335]: pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:49 compute-0 sudo[121322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnjcdozbvtoyrcxyjljpfubqeyvroied ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162269.14664-54-226459057335958/AnsiballZ_file.py'
Jan 23 09:57:49 compute-0 sudo[121322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:49 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:49 compute-0 python3.9[121324]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:49 compute-0 sudo[121322]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:49] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:57:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:49] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:57:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:57:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:57:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:57:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:57:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:57:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:57:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:57:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:57:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:50.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:57:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:50.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:50 compute-0 sudo[121499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lavkacpadkaygzglpnygvksjpnfokzat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162270.0822248-78-27562193732217/AnsiballZ_stat.py'
Jan 23 09:57:50 compute-0 sudo[121499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:50 compute-0 python3.9[121501]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:57:50 compute-0 sudo[121499]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:51 compute-0 sudo[121577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtkhrtdzdptbtugmxxmpepbpmucivmjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162270.0822248-78-27562193732217/AnsiballZ_file.py'
Jan 23 09:57:51 compute-0 sudo[121577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:51 compute-0 python3.9[121579]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.bsa7u1kd recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:51 compute-0 sudo[121577]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:51 compute-0 ceph-mon[74335]: pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:51 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 09:57:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:52.112926) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162272113080, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1295, "num_deletes": 252, "total_data_size": 2774480, "memory_usage": 2824528, "flush_reason": "Manual Compaction"}
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162272216373, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1767345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10918, "largest_seqno": 12212, "table_properties": {"data_size": 1762718, "index_size": 2087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11771, "raw_average_key_size": 20, "raw_value_size": 1752658, "raw_average_value_size": 2980, "num_data_blocks": 94, "num_entries": 588, "num_filter_entries": 588, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162146, "oldest_key_time": 1769162146, "file_creation_time": 1769162272, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 103491 microseconds, and 7143 cpu microseconds.
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 09:57:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:52.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:52.216434) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1767345 bytes OK
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:52.216455) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:52.312777) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:52.312831) EVENT_LOG_v1 {"time_micros": 1769162272312822, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:52.312858) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2768859, prev total WAL file size 2768859, number of live WAL files 2.
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:52.314022) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1725KB)], [26(13MB)]
Jan 23 09:57:52 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162272314145, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 15548915, "oldest_snapshot_seqno": -1}
Jan 23 09:57:52 compute-0 sudo[121731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qowghcsgggbtotcxgnppdirulriewunl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162272.0592275-138-105665587463889/AnsiballZ_stat.py'
Jan 23 09:57:52 compute-0 sudo[121731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 23 09:57:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:52.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 23 09:57:52 compute-0 python3.9[121733]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:57:52 compute-0 sudo[121731]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:52 compute-0 sudo[121809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxukfxrpeqxnuolpqahkuhqarlzxevjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162272.0592275-138-105665587463889/AnsiballZ_file.py'
Jan 23 09:57:52 compute-0 sudo[121809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:53 compute-0 python3.9[121811]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=._sg36luk recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:53 compute-0 sudo[121809]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4242 keys, 13455068 bytes, temperature: kUnknown
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162273364007, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 13455068, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13422754, "index_size": 20620, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 107869, "raw_average_key_size": 25, "raw_value_size": 13341246, "raw_average_value_size": 3145, "num_data_blocks": 884, "num_entries": 4242, "num_filter_entries": 4242, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769162272, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:53.364280) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 13455068 bytes
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:53.412423) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 14.8 rd, 12.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 13.1 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(16.4) write-amplify(7.6) OK, records in: 4706, records dropped: 464 output_compression: NoCompression
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:53.412491) EVENT_LOG_v1 {"time_micros": 1769162273412454, "job": 10, "event": "compaction_finished", "compaction_time_micros": 1049952, "compaction_time_cpu_micros": 37412, "output_level": 6, "num_output_files": 1, "total_output_size": 13455068, "num_input_records": 4706, "num_output_records": 4242, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162273413088, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162273415885, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:52.313913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:53.415998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:53.416005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:53.416006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:53.416008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:57:53 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:57:53.416010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:57:53 compute-0 ceph-mon[74335]: pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 09:57:53 compute-0 sudo[121961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqglbgmfuksjpotlwpqwukjlrpxogfmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162273.305826-177-279631776313112/AnsiballZ_file.py'
Jan 23 09:57:53 compute-0 sudo[121961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:53 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:53 compute-0 python3.9[121963]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:57:53 compute-0 sudo[121961]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:54.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:54 compute-0 sudo[122115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpqpcueytgfpgmcocxvjblfirvxshfve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162274.0352168-201-136357335812966/AnsiballZ_stat.py'
Jan 23 09:57:54 compute-0 sudo[122115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 23 09:57:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:54.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 23 09:57:54 compute-0 python3.9[122117]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:57:54 compute-0 sudo[122115]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:54 compute-0 sudo[122193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzenbkdzavrvuqeimxetoivqperuwqwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162274.0352168-201-136357335812966/AnsiballZ_file.py'
Jan 23 09:57:54 compute-0 sudo[122193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:54 compute-0 python3.9[122195]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:57:54 compute-0 sudo[122193]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:55 compute-0 sudo[122345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nodkvqkeiicatxqaoghjncemqnmnxwyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162275.1168299-201-77624955405810/AnsiballZ_stat.py'
Jan 23 09:57:55 compute-0 sudo[122345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:55 compute-0 ceph-mon[74335]: pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:55 compute-0 python3.9[122347]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:57:55 compute-0 sudo[122345]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:55 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:55 compute-0 sudo[122424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izrftyondugnehmyjjivbfbdolxsljpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162275.1168299-201-77624955405810/AnsiballZ_file.py'
Jan 23 09:57:55 compute-0 sudo[122424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:56 compute-0 python3.9[122426]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:57:56 compute-0 sudo[122424]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:56.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:56.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:56 compute-0 sudo[122577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gakvvyxamhshcwksqbjqafyetuccvpye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162276.3927224-270-219708882293980/AnsiballZ_file.py'
Jan 23 09:57:56 compute-0 sudo[122577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:56 compute-0 python3.9[122579]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:56 compute-0 sudo[122577]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:57:56.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:57:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:57:57 compute-0 ceph-mon[74335]: pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:57 compute-0 sudo[122729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwialjaqvldpjbrufqsiyqbmrixlfcpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162277.0703623-294-159158257125560/AnsiballZ_stat.py'
Jan 23 09:57:57 compute-0 sudo[122729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:57 compute-0 sudo[122732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:57:57 compute-0 sudo[122732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:57:57 compute-0 sudo[122732]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:57 compute-0 python3.9[122731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:57:57 compute-0 sudo[122729]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:57 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:57 compute-0 sudo[122833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baslbxrzuqzoscawvtltbdphujwjgwdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162277.0703623-294-159158257125560/AnsiballZ_file.py'
Jan 23 09:57:57 compute-0 sudo[122833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:58 compute-0 python3.9[122835]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:58 compute-0 sudo[122833]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:57:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:58.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:57:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:57:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000066s ======
Jan 23 09:57:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:58.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000066s
Jan 23 09:57:58 compute-0 sudo[122986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srdlfsrbwqugsyhgzxskprxnpnnlktqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162278.2152796-330-20525495641942/AnsiballZ_stat.py'
Jan 23 09:57:58 compute-0 sudo[122986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:58 compute-0 python3.9[122988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:57:58 compute-0 sudo[122986]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:59 compute-0 sudo[123064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udzlbeyrotyzizyucroigizmgdzwqfgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162278.2152796-330-20525495641942/AnsiballZ_file.py'
Jan 23 09:57:59 compute-0 sudo[123064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:57:59 compute-0 python3.9[123066]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:57:59 compute-0 ceph-mon[74335]: pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:59 compute-0 sudo[123064]: pam_unix(sudo:session): session closed for user root
Jan 23 09:57:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:57:59 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:57:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:57:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:59] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:57:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:57:59] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:58:00 compute-0 sudo[123218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdkjcjphhekvnjjspvtcjqptgdxrbxdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162279.3941877-366-64372274435588/AnsiballZ_systemd.py'
Jan 23 09:58:00 compute-0 sudo[123218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 23 09:58:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:00.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 23 09:58:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:00.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:00 compute-0 python3.9[123220]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:58:00 compute-0 systemd[1]: Reloading.
Jan 23 09:58:00 compute-0 systemd-rc-local-generator[123245]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:58:00 compute-0 systemd-sysv-generator[123251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:58:00 compute-0 sudo[123218]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:01 compute-0 sudo[123408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irutzpfhiybwykuarulxoopjxyuwywkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162280.9073412-390-213106920699114/AnsiballZ_stat.py'
Jan 23 09:58:01 compute-0 sudo[123408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:01 compute-0 ceph-mon[74335]: pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:01 compute-0 python3.9[123410]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:01 compute-0 sudo[123408]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:01 compute-0 sudo[123486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itxhqvvzsybjyzxrvopgalqcaevfgbiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162280.9073412-390-213106920699114/AnsiballZ_file.py'
Jan 23 09:58:01 compute-0 sudo[123486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:01 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:01 compute-0 python3.9[123488]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:01 compute-0 sudo[123486]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 09:58:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 23 09:58:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:02.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 23 09:58:02 compute-0 sudo[123640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxcqchzeydzwdtvszjmigxgykaevckoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162282.0505576-426-219661812343645/AnsiballZ_stat.py'
Jan 23 09:58:02 compute-0 sudo[123640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 23 09:58:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:02.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 23 09:58:02 compute-0 python3.9[123642]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:02 compute-0 sudo[123640]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:02 compute-0 sudo[123718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdoigxtnbtaywfyxfchevkbowthayaxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162282.0505576-426-219661812343645/AnsiballZ_file.py'
Jan 23 09:58:02 compute-0 sudo[123718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:03 compute-0 python3.9[123720]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:03 compute-0 sudo[123718]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:03 compute-0 ceph-mon[74335]: pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 09:58:03 compute-0 sudo[123870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrmejnukwlidtvrmzkuaiwfkalaahmvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162283.1922634-462-121905690937303/AnsiballZ_systemd.py'
Jan 23 09:58:03 compute-0 sudo[123870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:03 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:03 compute-0 python3.9[123872]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 09:58:03 compute-0 systemd[1]: Reloading.
Jan 23 09:58:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:03 compute-0 systemd-rc-local-generator[123902]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 09:58:03 compute-0 systemd-sysv-generator[123905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 09:58:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:04.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:04 compute-0 systemd[1]: Starting Create netns directory...
Jan 23 09:58:04 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 09:58:04 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 09:58:04 compute-0 systemd[1]: Finished Create netns directory.
Jan 23 09:58:04 compute-0 sudo[123870]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 23 09:58:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:04.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 23 09:58:04 compute-0 ceph-mon[74335]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:58:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:58:05 compute-0 python3.9[124065]: ansible-ansible.builtin.service_facts Invoked
Jan 23 09:58:05 compute-0 network[124082]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 09:58:05 compute-0 network[124083]: 'network-scripts' will be removed from distribution in near future.
Jan 23 09:58:05 compute-0 network[124084]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 09:58:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:05 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:58:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000033s ======
Jan 23 09:58:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:06.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 23 09:58:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000034s ======
Jan 23 09:58:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:06.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 23 09:58:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:58:06.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:58:07 compute-0 ceph-mon[74335]: pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:07 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:08.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:58:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:08.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:58:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095809 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 09:58:09 compute-0 sudo[124348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvrnmfqctyvrocgqaetsfsyepwecbalw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162288.8905017-540-163419283096179/AnsiballZ_stat.py'
Jan 23 09:58:09 compute-0 sudo[124348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:09 compute-0 ceph-mon[74335]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:09 compute-0 python3.9[124350]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:09 compute-0 sudo[124348]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:09 compute-0 sudo[124426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eskjbejlzciitavtfiqygryhtqhthors ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162288.8905017-540-163419283096179/AnsiballZ_file.py'
Jan 23 09:58:09 compute-0 sudo[124426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:09 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:09] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:58:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:09] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:58:09 compute-0 python3.9[124428]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:10 compute-0 sudo[124426]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:10.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:58:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:10.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:58:10 compute-0 sudo[124580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkaqvfwyqlvokvecdebztjvspiacosgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162290.2211094-579-195870607939576/AnsiballZ_file.py'
Jan 23 09:58:10 compute-0 sudo[124580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:10 compute-0 python3.9[124582]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:10 compute-0 sudo[124580]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:11 compute-0 sudo[124732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcekozddvtqfothehcawwqpkknrgxwxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162290.9207485-603-215436415925052/AnsiballZ_stat.py'
Jan 23 09:58:11 compute-0 sudo[124732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:11 compute-0 ceph-mon[74335]: pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:11 compute-0 python3.9[124734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:11 compute-0 sudo[124732]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:11 compute-0 sudo[124810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksvzhevicrscbysoboddbhazwbyyijuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162290.9207485-603-215436415925052/AnsiballZ_file.py'
Jan 23 09:58:11 compute-0 sudo[124810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:11 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:11 compute-0 python3.9[124812]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:11 compute-0 sudo[124810]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 09:58:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:12.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:58:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:12.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:58:12 compute-0 sshd-session[71222]: Received disconnect from 38.129.56.17 port 60458:11: disconnected by user
Jan 23 09:58:12 compute-0 sshd-session[71222]: Disconnected from user zuul 38.129.56.17 port 60458
Jan 23 09:58:12 compute-0 sshd-session[71219]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:58:12 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 09:58:12 compute-0 systemd[1]: session-18.scope: Consumed 1min 59.092s CPU time.
Jan 23 09:58:12 compute-0 systemd-logind[784]: Session 18 logged out. Waiting for processes to exit.
Jan 23 09:58:12 compute-0 systemd-logind[784]: Removed session 18.
Jan 23 09:58:13 compute-0 sudo[124964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-recwvvqbtxttrhxhoavcjogzlfbxqojo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162292.59683-648-193070257313146/AnsiballZ_timezone.py'
Jan 23 09:58:13 compute-0 sudo[124964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:13 compute-0 sshd-session[124967]: Connection closed by 117.187.106.248 port 57608
Jan 23 09:58:13 compute-0 python3.9[124966]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 23 09:58:13 compute-0 systemd[1]: Starting Time & Date Service...
Jan 23 09:58:13 compute-0 systemd[1]: Started Time & Date Service.
Jan 23 09:58:13 compute-0 sudo[124964]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:13 compute-0 ceph-mon[74335]: pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 09:58:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:13 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:58:13 compute-0 sudo[125124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwltqmpqfscmbxhpiunndlovuzgixgfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162293.735456-675-59353845754722/AnsiballZ_file.py'
Jan 23 09:58:13 compute-0 sudo[125124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:14 compute-0 python3.9[125126]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:14 compute-0 sudo[125124]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:58:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:14.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:58:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:14.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:14 compute-0 sudo[125277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdtzhxejuxxskjarjuokcvrxxjrofyaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162294.4162295-699-203456574277492/AnsiballZ_stat.py'
Jan 23 09:58:14 compute-0 sudo[125277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:14 compute-0 ceph-mon[74335]: pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:58:14 compute-0 python3.9[125279]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:14 compute-0 sudo[125277]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:15 compute-0 sudo[125355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzryayhhcpakvvjituvodhwtubeycqlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162294.4162295-699-203456574277492/AnsiballZ_file.py'
Jan 23 09:58:15 compute-0 sudo[125355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:15 compute-0 python3.9[125357]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:15 compute-0 sudo[125355]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:15 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:15 compute-0 sudo[125508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubgmouiddzpknetwthonqyhizlyhruyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162295.5755246-735-62061263590893/AnsiballZ_stat.py'
Jan 23 09:58:15 compute-0 sudo[125508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:58:16 compute-0 python3.9[125510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:16 compute-0 sudo[125508]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:16.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:58:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:16.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:58:16 compute-0 sudo[125587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezbzcvpmlqibcikapkkghvgdyhzjnukl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162295.5755246-735-62061263590893/AnsiballZ_file.py'
Jan 23 09:58:16 compute-0 sudo[125587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:16 compute-0 python3.9[125589]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.nt0qn1iy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:16 compute-0 sudo[125587]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:58:16.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 09:58:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:58:16.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 09:58:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:58:16.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 09:58:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:17 compute-0 sudo[125739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktgencdtjkqksopkglnkwtosafvudyux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162296.9195545-771-151322227464130/AnsiballZ_stat.py'
Jan 23 09:58:17 compute-0 sudo[125739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:17 compute-0 python3.9[125741]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:17 compute-0 sudo[125739]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:17 compute-0 ceph-mon[74335]: pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:58:17 compute-0 sudo[125792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:58:17 compute-0 sudo[125792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:17 compute-0 sudo[125840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdrtqejzkzpnbllilqxplteblthxauod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162296.9195545-771-151322227464130/AnsiballZ_file.py'
Jan 23 09:58:17 compute-0 sudo[125840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:17 compute-0 sudo[125792]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:17 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:17 compute-0 python3.9[125844]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:17 compute-0 sudo[125840]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:58:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:18.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:58:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:18.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:58:18 compute-0 sudo[125996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjkdzbstltlwuookorjnwgrstkleyayc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162298.150272-810-50424943313694/AnsiballZ_command.py'
Jan 23 09:58:18 compute-0 sudo[125996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:18 compute-0 python3.9[125998]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:58:18 compute-0 sudo[125996]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:19 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:58:19 compute-0 ceph-mon[74335]: pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:58:19 compute-0 sudo[126149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvmctzxkpzgjxsnvdghczetywbfigzzg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769162299.0848768-834-185911348898223/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 09:58:19 compute-0 sudo[126149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:19 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:19 compute-0 python3[126151]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 09:58:19 compute-0 sudo[126149]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:58:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:58:19
Jan 23 09:58:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:58:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 09:58:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.nfs', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', '.mgr', 'volumes']
Jan 23 09:58:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 09:58:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:19] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:58:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:19] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:58:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:58:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:58:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:58:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:58:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:20.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:20 compute-0 sudo[126304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqtmkbykaaquxqawteukmiflkdjoskvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162300.013478-858-136638792187817/AnsiballZ_stat.py'
Jan 23 09:58:20 compute-0 sudo[126304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:58:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:20.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:58:20 compute-0 python3.9[126306]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:20 compute-0 sudo[126304]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:58:20 compute-0 sudo[126382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qynngrfoghuywnkoilytemfhyjnvbzoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162300.013478-858-136638792187817/AnsiballZ_file.py'
Jan 23 09:58:20 compute-0 sudo[126382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:21 compute-0 python3.9[126384]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:21 compute-0 sudo[126382]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:21 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:58:21 compute-0 sudo[126535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvsvrucmducjjohodqmbhpgdpqzscfik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162301.4285462-894-116855115793077/AnsiballZ_stat.py'
Jan 23 09:58:21 compute-0 sudo[126535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:22 compute-0 python3.9[126537]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:22 compute-0 sudo[126535]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:22 compute-0 ceph-mon[74335]: pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:58:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:58:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:22.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:58:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78002320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:58:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:22.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
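The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 in the radosgw/beast lines above look like load-balancer or monitoring health probes hitting the RGW frontend roughly every two seconds per source. A minimal sketch of reproducing one by hand; the endpoint address and port are assumptions, since the log does not show which address/port the beast frontend is bound to:

    $ curl -sI http://192.168.122.100:8080/ | head -n1   # hypothetical RGW endpoint; expect "HTTP/1.1 200 OK"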
Jan 23 09:58:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 09:58:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:58:22 compute-0 sudo[126661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxoaankonhnqprtzyaaoflgkzszzizul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162301.4285462-894-116855115793077/AnsiballZ_copy.py'
Jan 23 09:58:22 compute-0 sudo[126661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:22 compute-0 python3.9[126663]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162301.4285462-894-116855115793077/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:22 compute-0 sudo[126661]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:22 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 09:58:22 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 09:58:23 compute-0 sudo[126814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvyxnyaqrgfwiavxnnmiiyhpfjuvrroo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162303.042288-939-125138439778440/AnsiballZ_stat.py'
Jan 23 09:58:23 compute-0 sudo[126814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:23 compute-0 python3.9[126816]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:23 compute-0 sudo[126814]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:23 compute-0 ceph-mon[74335]: pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:58:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:23 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:23 compute-0 sshd-session[124970]: error: kex_exchange_identification: read: Connection timed out
Jan 23 09:58:23 compute-0 sshd-session[124970]: banner exchange: Connection from 117.187.106.248 port 57624: Connection timed out
Jan 23 09:58:23 compute-0 sudo[126893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjgntcrpryilomvtmueryxonnohxkjvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162303.042288-939-125138439778440/AnsiballZ_file.py'
Jan 23 09:58:23 compute-0 sudo[126893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:58:24 compute-0 python3.9[126895]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:24 compute-0 sudo[126893]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:24.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:24.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:24 compute-0 sudo[127046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlydvrttqamhjcmamaeihuxrhthxxxkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162304.4348435-975-34005080219343/AnsiballZ_stat.py'
Jan 23 09:58:24 compute-0 sudo[127046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:24 compute-0 python3.9[127048]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:24 compute-0 sudo[127046]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:25 compute-0 ceph-mon[74335]: pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:58:25 compute-0 sudo[127124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phqnvjywhzkbnkjahzqdhfbgzctormqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162304.4348435-975-34005080219343/AnsiballZ_file.py'
Jan 23 09:58:25 compute-0 sudo[127124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:25 compute-0 python3.9[127126]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:25 compute-0 sudo[127124]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:25 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 09:58:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:25 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78002320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:58:26 compute-0 sudo[127278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzwtevapkrrkdrzvpixmkhgdhaeqjeph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162305.6748574-1011-162392766394426/AnsiballZ_stat.py'
Jan 23 09:58:26 compute-0 sudo[127278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:26 compute-0 python3.9[127280]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:26.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:26 compute-0 sudo[127278]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:26.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:26 compute-0 sudo[127356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wznzgqqpbdvnpwondwmgahmddhgjyzaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162305.6748574-1011-162392766394426/AnsiballZ_file.py'
Jan 23 09:58:26 compute-0 sudo[127356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:26 compute-0 python3.9[127358]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:26 compute-0 sudo[127356]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:58:26.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:58:27 compute-0 sudo[127508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvlmmobwqzltnvnzzrbutcqxqkadpfhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162307.0003543-1050-171753944770946/AnsiballZ_command.py'
Jan 23 09:58:27 compute-0 sudo[127508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:27 compute-0 ceph-mon[74335]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 09:58:27 compute-0 python3.9[127510]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
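The ansible command task logged above validates the assembled EDPM ruleset before anything is loaded: the five fragment files are concatenated and piped to nft in check-only mode. Run by hand, the same validation (using the exact file list from the logged command) would be:

    $ cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
          /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
          /etc/nftables/edpm-jumps.nft | nft -c -f -
    # -c (--check) parses and validates the ruleset read from stdin without applying it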
Jan 23 09:58:27 compute-0 sudo[127508]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:27 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1020 B/s wr, 3 op/s
Jan 23 09:58:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:28 compute-0 sudo[127665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbeimncxdnvaoiobngwmlecffdkxvcep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162307.7014692-1074-211953723284996/AnsiballZ_blockinfile.py'
Jan 23 09:58:28 compute-0 sudo[127665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:28.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:28 compute-0 python3.9[127667]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
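From the blockinfile parameters logged above (marker "# {mark} ANSIBLE MANAGED BLOCK" with BEGIN/END markers and the four include lines, validated with "nft -c -f %s"), the managed block written into /etc/sysconfig/nftables.conf should look roughly like this reconstruction:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK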
Jan 23 09:58:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:28 compute-0 sudo[127665]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:58:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:28.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:58:28 compute-0 sudo[127817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcxelbrmgxqvnlqvxrcbkngyyrunhajw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162308.6630661-1101-249780377511937/AnsiballZ_file.py'
Jan 23 09:58:28 compute-0 sudo[127817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:29 compute-0 python3.9[127819]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:29 compute-0 sudo[127817]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:29 compute-0 sudo[127820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:58:29 compute-0 sudo[127820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:29 compute-0 sudo[127820]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:29 compute-0 sudo[127869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 23 09:58:29 compute-0 sudo[127869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:29 compute-0 sudo[128035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diqcnxggsvjmppchkfqbclhcklnkgfda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162309.3066204-1101-247266668980758/AnsiballZ_file.py'
Jan 23 09:58:29 compute-0 sudo[128035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:29 compute-0 sudo[127869]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:58:29 compute-0 ceph-mon[74335]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1020 B/s wr, 3 op/s
Jan 23 09:58:29 compute-0 python3.9[128039]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:29 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:29 compute-0 sudo[128035]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 935 B/s wr, 2 op/s
Jan 23 09:58:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:29] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 09:58:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:29] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 09:58:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:58:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:30 compute-0 sudo[128104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:58:30 compute-0 sudo[128104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:30 compute-0 sudo[128104]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:30 compute-0 sudo[128147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 09:58:30 compute-0 sudo[128147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:58:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:30.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:58:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:30.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:30 compute-0 sudo[128259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thbjjaznfbnngotdewqmjyjraqqqzyol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162310.0123415-1146-84792955580655/AnsiballZ_mount.py'
Jan 23 09:58:30 compute-0 sudo[128259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:30 compute-0 sudo[128147]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:58:30 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:58:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:58:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:58:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:58:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:58:30 compute-0 python3.9[128263]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 09:58:30 compute-0 sudo[128259]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 09:58:30 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:58:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 09:58:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:58:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:58:30 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:58:30 compute-0 sudo[128323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:58:30 compute-0 sudo[128323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:30 compute-0 sudo[128323]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:30 compute-0 sudo[128376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 09:58:30 compute-0 sudo[128376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:31 compute-0 ceph-mon[74335]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 935 B/s wr, 2 op/s
Jan 23 09:58:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:58:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:58:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:58:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:58:31 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:58:31 compute-0 sudo[128477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezaikpolmywxqmvekdqeqlwnrycuukef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162310.8590398-1146-40253099278311/AnsiballZ_mount.py'
Jan 23 09:58:31 compute-0 sudo[128477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095831 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
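The haproxy "Layer4 check passed" message above is a plain TCP connect, while the repeated ganesha svc_vc_recv "proxy header rest len failed ... (will set dead)" events throughout this window suggest the NFS listener expects a PROXY-protocol header on each incoming connection; a bare health-check connection would then be torn down and logged this way, which matches how the two message streams interleave here. A hypothetical haproxy backend server line illustrating the distinction (the name, address, and port are assumptions, not taken from this log):

    server nfs.cephfs.0 192.168.122.100:2049 check check-send-proxy send-proxy-v2
    # send-proxy-v2: wrap client traffic in a PROXY v2 header for the backend
    # check-send-proxy: make the health check send the header too, so the backend does not reject it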
Jan 23 09:58:31 compute-0 python3.9[128479]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 09:58:31 compute-0 sudo[128477]: pam_unix(sudo:session): session closed for user root
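The two ansible.posix.mount tasks above mount hugetlbfs with explicit page sizes on the directories created a few entries earlier (/dev/hugepages1G and /dev/hugepages2M). A minimal sketch of the equivalent manual mounts, assuming 1G and 2M hugepages are enabled in the kernel; the options are taken from the logged parameters:

    $ mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    $ mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # state=mounted with boot=True also persists matching entries in /etc/fstab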
Jan 23 09:58:31 compute-0 podman[128521]: 2026-01-23 09:58:31.41688233 +0000 UTC m=+0.073261259 container create 69b4e36e8852ce5d4bc59be7555f8de7da642f3d37a68707daada7670b002d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shamir, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 23 09:58:31 compute-0 systemd[1]: Started libpod-conmon-69b4e36e8852ce5d4bc59be7555f8de7da642f3d37a68707daada7670b002d0e.scope.
Jan 23 09:58:31 compute-0 podman[128521]: 2026-01-23 09:58:31.370931076 +0000 UTC m=+0.027310025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:58:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:58:31 compute-0 podman[128521]: 2026-01-23 09:58:31.55539169 +0000 UTC m=+0.211770629 container init 69b4e36e8852ce5d4bc59be7555f8de7da642f3d37a68707daada7670b002d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shamir, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 09:58:31 compute-0 podman[128521]: 2026-01-23 09:58:31.563736866 +0000 UTC m=+0.220115795 container start 69b4e36e8852ce5d4bc59be7555f8de7da642f3d37a68707daada7670b002d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:58:31 compute-0 podman[128521]: 2026-01-23 09:58:31.568201988 +0000 UTC m=+0.224580947 container attach 69b4e36e8852ce5d4bc59be7555f8de7da642f3d37a68707daada7670b002d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shamir, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:58:31 compute-0 epic_shamir[128558]: 167 167
Jan 23 09:58:31 compute-0 systemd[1]: libpod-69b4e36e8852ce5d4bc59be7555f8de7da642f3d37a68707daada7670b002d0e.scope: Deactivated successfully.
Jan 23 09:58:31 compute-0 podman[128521]: 2026-01-23 09:58:31.570903537 +0000 UTC m=+0.227282466 container died 69b4e36e8852ce5d4bc59be7555f8de7da642f3d37a68707daada7670b002d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 09:58:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-16f945c3c42357ec002d0243904996a78195754eb35981be011e58942918b009-merged.mount: Deactivated successfully.
Jan 23 09:58:31 compute-0 podman[128521]: 2026-01-23 09:58:31.614162372 +0000 UTC m=+0.270541301 container remove 69b4e36e8852ce5d4bc59be7555f8de7da642f3d37a68707daada7670b002d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shamir, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:58:31 compute-0 systemd[1]: libpod-conmon-69b4e36e8852ce5d4bc59be7555f8de7da642f3d37a68707daada7670b002d0e.scope: Deactivated successfully.
Jan 23 09:58:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:31 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:31 compute-0 podman[128587]: 2026-01-23 09:58:31.790435035 +0000 UTC m=+0.059797203 container create d8d2487e8d284050296e4a595d7abb4a4e1121086d666553092e5803c687707f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jang, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 09:58:31 compute-0 sshd-session[121015]: Connection closed by 192.168.122.30 port 58418
Jan 23 09:58:31 compute-0 systemd[1]: Started libpod-conmon-d8d2487e8d284050296e4a595d7abb4a4e1121086d666553092e5803c687707f.scope.
Jan 23 09:58:31 compute-0 sshd-session[121012]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:58:31 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 23 09:58:31 compute-0 systemd[1]: session-44.scope: Consumed 30.062s CPU time.
Jan 23 09:58:31 compute-0 systemd-logind[784]: Session 44 logged out. Waiting for processes to exit.
Jan 23 09:58:31 compute-0 systemd-logind[784]: Removed session 44.
Jan 23 09:58:31 compute-0 podman[128587]: 2026-01-23 09:58:31.75599127 +0000 UTC m=+0.025353448 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:58:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50852333e851987b922eeb2a44cbfa8f6bc676d00cdcfa4c20ebda2b9a2cfd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50852333e851987b922eeb2a44cbfa8f6bc676d00cdcfa4c20ebda2b9a2cfd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50852333e851987b922eeb2a44cbfa8f6bc676d00cdcfa4c20ebda2b9a2cfd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50852333e851987b922eeb2a44cbfa8f6bc676d00cdcfa4c20ebda2b9a2cfd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f50852333e851987b922eeb2a44cbfa8f6bc676d00cdcfa4c20ebda2b9a2cfd2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:31 compute-0 podman[128587]: 2026-01-23 09:58:31.892203873 +0000 UTC m=+0.161566051 container init d8d2487e8d284050296e4a595d7abb4a4e1121086d666553092e5803c687707f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 09:58:31 compute-0 podman[128587]: 2026-01-23 09:58:31.899678133 +0000 UTC m=+0.169040291 container start d8d2487e8d284050296e4a595d7abb4a4e1121086d666553092e5803c687707f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 09:58:31 compute-0 podman[128587]: 2026-01-23 09:58:31.903767343 +0000 UTC m=+0.173129531 container attach d8d2487e8d284050296e4a595d7abb4a4e1121086d666553092e5803c687707f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jang, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:58:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 935 B/s wr, 2 op/s
Jan 23 09:58:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:32 compute-0 xenodochial_jang[128603]: --> passed data devices: 0 physical, 1 LVM
Jan 23 09:58:32 compute-0 xenodochial_jang[128603]: --> All data devices are unavailable
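The cephadm-driven ceph-volume run above (lvm batch --no-auto /dev/ceph_vg0/ceph_lv0, launched via sudo a few entries earlier) reports one LVM data device passed and all data devices unavailable, which typically means the logical volume was filtered out, for example because it is already prepared as an OSD; the log alone does not show which filter rejected it. For reference, a hypothetical cephadm OSD service spec that would target this LV under the logged CEPH_VOLUME_OSDSPEC_AFFINITY value "default_drive_group" (the spec itself is an assumption and is not shown in the log):

    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        paths:
          - /dev/ceph_vg0/ceph_lv0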
Jan 23 09:58:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:32 compute-0 systemd[1]: libpod-d8d2487e8d284050296e4a595d7abb4a4e1121086d666553092e5803c687707f.scope: Deactivated successfully.
Jan 23 09:58:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:32.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:32 compute-0 podman[128619]: 2026-01-23 09:58:32.326026163 +0000 UTC m=+0.026789850 container died d8d2487e8d284050296e4a595d7abb4a4e1121086d666553092e5803c687707f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jang, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:58:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f50852333e851987b922eeb2a44cbfa8f6bc676d00cdcfa4c20ebda2b9a2cfd2-merged.mount: Deactivated successfully.
Jan 23 09:58:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 09:58:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:32.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 09:58:32 compute-0 podman[128619]: 2026-01-23 09:58:32.406932866 +0000 UTC m=+0.107696543 container remove d8d2487e8d284050296e4a595d7abb4a4e1121086d666553092e5803c687707f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jang, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 09:58:32 compute-0 systemd[1]: libpod-conmon-d8d2487e8d284050296e4a595d7abb4a4e1121086d666553092e5803c687707f.scope: Deactivated successfully.
Jan 23 09:58:32 compute-0 sudo[128376]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:32 compute-0 sudo[128634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:58:32 compute-0 sudo[128634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:32 compute-0 sudo[128634]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:32 compute-0 sudo[128659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 09:58:32 compute-0 sudo[128659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:33 compute-0 podman[128724]: 2026-01-23 09:58:33.004188411 +0000 UTC m=+0.042239135 container create 98e50cff2e05ab24a4c32846a69ecfdb1c2e9f3c09d5387a28fa078b66514426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_moore, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:58:33 compute-0 systemd[1]: Started libpod-conmon-98e50cff2e05ab24a4c32846a69ecfdb1c2e9f3c09d5387a28fa078b66514426.scope.
Jan 23 09:58:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:58:33 compute-0 podman[128724]: 2026-01-23 09:58:33.075073509 +0000 UTC m=+0.113124253 container init 98e50cff2e05ab24a4c32846a69ecfdb1c2e9f3c09d5387a28fa078b66514426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 23 09:58:33 compute-0 podman[128724]: 2026-01-23 09:58:33.082450107 +0000 UTC m=+0.120500831 container start 98e50cff2e05ab24a4c32846a69ecfdb1c2e9f3c09d5387a28fa078b66514426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_moore, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:58:33 compute-0 podman[128724]: 2026-01-23 09:58:32.988654643 +0000 UTC m=+0.026705397 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:58:33 compute-0 trusting_moore[128741]: 167 167
Jan 23 09:58:33 compute-0 podman[128724]: 2026-01-23 09:58:33.085843387 +0000 UTC m=+0.123894141 container attach 98e50cff2e05ab24a4c32846a69ecfdb1c2e9f3c09d5387a28fa078b66514426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_moore, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:58:33 compute-0 systemd[1]: libpod-98e50cff2e05ab24a4c32846a69ecfdb1c2e9f3c09d5387a28fa078b66514426.scope: Deactivated successfully.
Jan 23 09:58:33 compute-0 podman[128724]: 2026-01-23 09:58:33.087532836 +0000 UTC m=+0.125583580 container died 98e50cff2e05ab24a4c32846a69ecfdb1c2e9f3c09d5387a28fa078b66514426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4608a5eefee97f67abfed0bd75b320004608795d8bf6e74b77ad0d5ff7861cb7-merged.mount: Deactivated successfully.
Jan 23 09:58:33 compute-0 podman[128724]: 2026-01-23 09:58:33.129108591 +0000 UTC m=+0.167159315 container remove 98e50cff2e05ab24a4c32846a69ecfdb1c2e9f3c09d5387a28fa078b66514426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 09:58:33 compute-0 systemd[1]: libpod-conmon-98e50cff2e05ab24a4c32846a69ecfdb1c2e9f3c09d5387a28fa078b66514426.scope: Deactivated successfully.
Jan 23 09:58:33 compute-0 ceph-mon[74335]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 935 B/s wr, 2 op/s
Jan 23 09:58:33 compute-0 podman[128762]: 2026-01-23 09:58:33.284092427 +0000 UTC m=+0.044588915 container create 39df01706ed4bdd1720103d98b90f8263ca9a9625561b8f40efbf8e233a30ce7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_faraday, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:58:33 compute-0 systemd[1]: Started libpod-conmon-39df01706ed4bdd1720103d98b90f8263ca9a9625561b8f40efbf8e233a30ce7.scope.
Jan 23 09:58:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98e47a7a4343fcc8e5be989e07e7b31108d0584b4ed770d1a8eb0c04174ab1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98e47a7a4343fcc8e5be989e07e7b31108d0584b4ed770d1a8eb0c04174ab1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98e47a7a4343fcc8e5be989e07e7b31108d0584b4ed770d1a8eb0c04174ab1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98e47a7a4343fcc8e5be989e07e7b31108d0584b4ed770d1a8eb0c04174ab1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:33 compute-0 podman[128762]: 2026-01-23 09:58:33.355307865 +0000 UTC m=+0.115804373 container init 39df01706ed4bdd1720103d98b90f8263ca9a9625561b8f40efbf8e233a30ce7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:58:33 compute-0 podman[128762]: 2026-01-23 09:58:33.266603362 +0000 UTC m=+0.027099870 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:58:33 compute-0 podman[128762]: 2026-01-23 09:58:33.362676552 +0000 UTC m=+0.123173040 container start 39df01706ed4bdd1720103d98b90f8263ca9a9625561b8f40efbf8e233a30ce7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_faraday, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 23 09:58:33 compute-0 podman[128762]: 2026-01-23 09:58:33.366866735 +0000 UTC m=+0.127363313 container attach 39df01706ed4bdd1720103d98b90f8263ca9a9625561b8f40efbf8e233a30ce7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_faraday, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 09:58:33 compute-0 strange_faraday[128778]: {
Jan 23 09:58:33 compute-0 strange_faraday[128778]:     "1": [
Jan 23 09:58:33 compute-0 strange_faraday[128778]:         {
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             "devices": [
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "/dev/loop3"
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             ],
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             "lv_name": "ceph_lv0",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             "lv_size": "21470642176",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             "name": "ceph_lv0",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             "tags": {
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.cluster_name": "ceph",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.crush_device_class": "",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.encrypted": "0",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.osd_id": "1",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.type": "block",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.vdo": "0",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:                 "ceph.with_tpm": "0"
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             },
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             "type": "block",
Jan 23 09:58:33 compute-0 strange_faraday[128778]:             "vg_name": "ceph_vg0"
Jan 23 09:58:33 compute-0 strange_faraday[128778]:         }
Jan 23 09:58:33 compute-0 strange_faraday[128778]:     ]
Jan 23 09:58:33 compute-0 strange_faraday[128778]: }
Jan 23 09:58:33 compute-0 systemd[1]: libpod-39df01706ed4bdd1720103d98b90f8263ca9a9625561b8f40efbf8e233a30ce7.scope: Deactivated successfully.
Jan 23 09:58:33 compute-0 podman[128762]: 2026-01-23 09:58:33.705230833 +0000 UTC m=+0.465727341 container died 39df01706ed4bdd1720103d98b90f8263ca9a9625561b8f40efbf8e233a30ce7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c98e47a7a4343fcc8e5be989e07e7b31108d0584b4ed770d1a8eb0c04174ab1f-merged.mount: Deactivated successfully.
Jan 23 09:58:33 compute-0 podman[128762]: 2026-01-23 09:58:33.755578427 +0000 UTC m=+0.516074915 container remove 39df01706ed4bdd1720103d98b90f8263ca9a9625561b8f40efbf8e233a30ce7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_faraday, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 23 09:58:33 compute-0 systemd[1]: libpod-conmon-39df01706ed4bdd1720103d98b90f8263ca9a9625561b8f40efbf8e233a30ce7.scope: Deactivated successfully.
Jan 23 09:58:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:33 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:33 compute-0 sudo[128659]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:33 compute-0 sudo[128801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:58:33 compute-0 sudo[128801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:33 compute-0 sudo[128801]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 425 B/s wr, 1 op/s
Jan 23 09:58:33 compute-0 sudo[128826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 09:58:33 compute-0 sudo[128826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:34.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:34 compute-0 podman[128893]: 2026-01-23 09:58:34.395801548 +0000 UTC m=+0.039750163 container create c046a166e22e1ab2c910d442b371668265a16e010407a369a85d0a96777b465b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 23 09:58:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:34.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:34 compute-0 systemd[1]: Started libpod-conmon-c046a166e22e1ab2c910d442b371668265a16e010407a369a85d0a96777b465b.scope.
Jan 23 09:58:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:58:34 compute-0 podman[128893]: 2026-01-23 09:58:34.46410569 +0000 UTC m=+0.108054325 container init c046a166e22e1ab2c910d442b371668265a16e010407a369a85d0a96777b465b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_newton, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:58:34 compute-0 podman[128893]: 2026-01-23 09:58:34.471711114 +0000 UTC m=+0.115659719 container start c046a166e22e1ab2c910d442b371668265a16e010407a369a85d0a96777b465b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:58:34 compute-0 magical_newton[128909]: 167 167
Jan 23 09:58:34 compute-0 systemd[1]: libpod-c046a166e22e1ab2c910d442b371668265a16e010407a369a85d0a96777b465b.scope: Deactivated successfully.
Jan 23 09:58:34 compute-0 podman[128893]: 2026-01-23 09:58:34.476501395 +0000 UTC m=+0.120450040 container attach c046a166e22e1ab2c910d442b371668265a16e010407a369a85d0a96777b465b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 23 09:58:34 compute-0 podman[128893]: 2026-01-23 09:58:34.380753004 +0000 UTC m=+0.024701639 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:58:34 compute-0 podman[128893]: 2026-01-23 09:58:34.477313219 +0000 UTC m=+0.121261834 container died c046a166e22e1ab2c910d442b371668265a16e010407a369a85d0a96777b465b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-80061e5174e143af6f17fa2f2d0dc40ee3a192785affcd99f82ed5ba4cfe7968-merged.mount: Deactivated successfully.
Jan 23 09:58:34 compute-0 podman[128893]: 2026-01-23 09:58:34.523439118 +0000 UTC m=+0.167387733 container remove c046a166e22e1ab2c910d442b371668265a16e010407a369a85d0a96777b465b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_newton, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:58:34 compute-0 systemd[1]: libpod-conmon-c046a166e22e1ab2c910d442b371668265a16e010407a369a85d0a96777b465b.scope: Deactivated successfully.
Jan 23 09:58:34 compute-0 podman[128933]: 2026-01-23 09:58:34.681799603 +0000 UTC m=+0.043107951 container create 33a195446a9b0cdf2df6fbca5f869feb98d0293f03e91d7f8165633525363442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:58:34 compute-0 systemd[1]: Started libpod-conmon-33a195446a9b0cdf2df6fbca5f869feb98d0293f03e91d7f8165633525363442.scope.
Jan 23 09:58:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9157b7d07e085746c903d650209e0693ff6d03e4c5d85b9bddfad7091a35a46a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9157b7d07e085746c903d650209e0693ff6d03e4c5d85b9bddfad7091a35a46a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9157b7d07e085746c903d650209e0693ff6d03e4c5d85b9bddfad7091a35a46a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9157b7d07e085746c903d650209e0693ff6d03e4c5d85b9bddfad7091a35a46a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:58:34 compute-0 podman[128933]: 2026-01-23 09:58:34.662244767 +0000 UTC m=+0.023553145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:58:34 compute-0 podman[128933]: 2026-01-23 09:58:34.760171231 +0000 UTC m=+0.121479609 container init 33a195446a9b0cdf2df6fbca5f869feb98d0293f03e91d7f8165633525363442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 23 09:58:34 compute-0 podman[128933]: 2026-01-23 09:58:34.767122826 +0000 UTC m=+0.128431184 container start 33a195446a9b0cdf2df6fbca5f869feb98d0293f03e91d7f8165633525363442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 23 09:58:34 compute-0 podman[128933]: 2026-01-23 09:58:34.770693891 +0000 UTC m=+0.132002259 container attach 33a195446a9b0cdf2df6fbca5f869feb98d0293f03e91d7f8165633525363442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 09:58:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:58:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:58:35 compute-0 ceph-mon[74335]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 425 B/s wr, 1 op/s
Jan 23 09:58:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:58:35 compute-0 lvm[129023]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:58:35 compute-0 lvm[129023]: VG ceph_vg0 finished
Jan 23 09:58:35 compute-0 heuristic_herschel[128949]: {}
Jan 23 09:58:35 compute-0 systemd[1]: libpod-33a195446a9b0cdf2df6fbca5f869feb98d0293f03e91d7f8165633525363442.scope: Deactivated successfully.
Jan 23 09:58:35 compute-0 systemd[1]: libpod-33a195446a9b0cdf2df6fbca5f869feb98d0293f03e91d7f8165633525363442.scope: Consumed 1.235s CPU time.
Jan 23 09:58:35 compute-0 podman[128933]: 2026-01-23 09:58:35.535019968 +0000 UTC m=+0.896328346 container died 33a195446a9b0cdf2df6fbca5f869feb98d0293f03e91d7f8165633525363442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9157b7d07e085746c903d650209e0693ff6d03e4c5d85b9bddfad7091a35a46a-merged.mount: Deactivated successfully.
Jan 23 09:58:35 compute-0 podman[128933]: 2026-01-23 09:58:35.594926282 +0000 UTC m=+0.956234630 container remove 33a195446a9b0cdf2df6fbca5f869feb98d0293f03e91d7f8165633525363442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:58:35 compute-0 systemd[1]: libpod-conmon-33a195446a9b0cdf2df6fbca5f869feb98d0293f03e91d7f8165633525363442.scope: Deactivated successfully.
Jan 23 09:58:35 compute-0 sudo[128826]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:58:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:58:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:35 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 425 B/s wr, 1 op/s
Jan 23 09:58:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:36 compute-0 sudo[129038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:58:36 compute-0 sudo[129038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:36 compute-0 sudo[129038]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 09:58:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:36.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 09:58:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:36.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:36 compute-0 ceph-mon[74335]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 425 B/s wr, 1 op/s
Jan 23 09:58:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:58:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:58:36.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 09:58:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:58:36.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:58:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 09:58:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:37 compute-0 sshd-session[129064]: Accepted publickey for zuul from 192.168.122.30 port 52444 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:58:37 compute-0 systemd-logind[784]: New session 45 of user zuul.
Jan 23 09:58:37 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 23 09:58:37 compute-0 sshd-session[129064]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:58:37 compute-0 sudo[129074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:58:37 compute-0 sudo[129074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:37 compute-0 sudo[129074]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:37 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 425 B/s wr, 1 op/s
Jan 23 09:58:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:38 compute-0 sudo[129244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhwjcbfwrjqzkpdesrpibrrdoqbjhigl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162317.6857872-18-26232898707048/AnsiballZ_tempfile.py'
Jan 23 09:58:38 compute-0 sudo[129244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:58:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:38.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:58:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:38.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:38 compute-0 python3.9[129246]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 23 09:58:38 compute-0 sudo[129244]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:39 compute-0 sudo[129396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jehszcvftkliaglhjpiddoyqovbbuizx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162318.6094935-54-2716940102785/AnsiballZ_stat.py'
Jan 23 09:58:39 compute-0 sudo[129396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:39 compute-0 python3.9[129398]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:58:39 compute-0 ceph-mon[74335]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 425 B/s wr, 1 op/s
Jan 23 09:58:39 compute-0 sudo[129396]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:39 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:39 compute-0 sudo[129551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqkscpbviduypbtmhtrpfmlvekvxwtlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162319.491458-78-33851710869930/AnsiballZ_slurp.py'
Jan 23 09:58:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:58:39 compute-0 sudo[129551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:39] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Jan 23 09:58:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:39] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Jan 23 09:58:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:40 compute-0 python3.9[129553]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 23 09:58:40 compute-0 sudo[129551]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 09:58:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:40.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 09:58:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:58:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:40.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:58:40 compute-0 sudo[129704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzpwoohltxebuqdgqigyxkvmsticsyqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162320.359095-102-152087959218604/AnsiballZ_stat.py'
Jan 23 09:58:40 compute-0 sudo[129704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:40 compute-0 python3.9[129706]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.79a2arj3 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:58:40 compute-0 sudo[129704]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:41 compute-0 ceph-mon[74335]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:58:41 compute-0 sudo[129829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onjadyqwaersxfnlcwkbsdbswbkvsqfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162320.359095-102-152087959218604/AnsiballZ_copy.py'
Jan 23 09:58:41 compute-0 sudo[129829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:41 compute-0 python3.9[129831]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.79a2arj3 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162320.359095-102-152087959218604/.source.79a2arj3 _original_basename=.x9vt21lx follow=False checksum=6c63675b4fda7e0d01c328fcbe34dc890491aeeb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:41 compute-0 sudo[129829]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:41 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:58:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:42.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:42.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:42 compute-0 sudo[129983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfcmxgctsaarnwhorbaezdydzvjyklae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162321.8657339-147-164279114089804/AnsiballZ_setup.py'
Jan 23 09:58:42 compute-0 sudo[129983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:42 compute-0 ceph-mon[74335]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:58:42 compute-0 python3.9[129985]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:58:42 compute-0 sudo[129983]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:43 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 09:58:43 compute-0 sudo[130137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yblefvxroftrpowpudqwbibkvcbguzoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162323.1508493-172-62350532564635/AnsiballZ_blockinfile.py'
Jan 23 09:58:43 compute-0 sudo[130137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:43 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:43 compute-0 python3.9[130139]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+cj2so8SS29oYZ1K+7e02qi6fVkGXJzGMkIN9mgJPLCBtQ6vpBYEObTZZXuMIHhdiMUAp6RDjs11OXDkAB9R7e2ncjMKn7J2EHbmceT7rNq9L0w+QaLKFxl+xdJQ9QtO9ioNgJFXXQZt/IOeE8S4I5yhEM5jn+YEW0LPbp99Wz1d1Ob4GI1t0hCEv/4ayC3nRIXkuIhl7mrV0s22F8NE8f0hZZKaw1u8xmmpbD8ZVBsC6cxWE3kIQBmHu8q9tylaZjLsjGxBDUF9ko3bxeppvLPDMem89VLQCWbgmOHl5ZIPsyNglusTIBUp8uA7g+Agz1uMojClMHnsZl68WjbCAVcRA9y/UgXphGyEYZCUJMv8CjYKzxriyHALZl6YFSyC5ELlEAxL8fyTwtXhQ1+e/lI9Ak3n4suC6JyH0NQ27MPIf7riyUFJLw9lZaDerZOkvI7/Y2PfRvdfyZ57g/xgGeLY0Ch30SFVC04lNXIpsOWbLBOg0BMP9ZiciAYAF9Yc=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIreWuVcekgp7kF5pU+4TIKLHZyhuqd4Ly312ExEA5EG
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJWfXOTsTXqDhdGhW7VcUXsYqCS7TzCPyaa9/dA9e0xKjnni1/GRM8FdYXWYbGsNnBQFWk3/pXD6sj3jKzK34AM=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDA/6JnQZ3CFC7xgv4DrvdZizVbVnsolKcWkvqzGu1hFHGmOEb7ehbxGPHBnp2N9iRf13H12EI0qNI6A2f44V0oXE3SP+fpJ6PVYQRQpKqTEiweqZaHEyYE2FnKy0HDQisg5hwr1egYLjGXChdkyqWSokL1LqaCyD2+EcOzUvC/GuVQ7eQnQBIGBpYAnNzS/64KKOZ0+0soOPJGxVCma6JN/2GcCunX6j3HmkOOQeuEFETXfUPHh1ylu2+3yINl34ERJN5YwgR/S+BKENOsJTu5XkYTCvc90CuvfkoF9K5Y2yE5nKwZaSf7n2SbUPil2Zph4l7opsd5IKxi6k2mVzw/CO2NHr136BZ06+sKXytDgorWqWzqnci8zfxeYF3D7q7AXD+IDVMP5T6op93oS2enAQFHG1vTLB0otQqnxUgNANbJkrKgXAS8G8I1m2sPz+qOFuuZa2/nqhzrd6/DEur5VoW6n9c/OcrbfapLEzD1jQDmsQI7oZkT++dt3Ogb3Vk=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIII1sLqY7Nqi1A3CKXLokfn1vrns/lK1gUkDNSlbek2o
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM9QZXHUsthFMKA5Si4Htl7MIwK0G4VAltQgbo39JJHrgD7h27U1jbnuJQ1S2bBX8FMSkqf5TPmM7Gr9QOATO+4=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWbrXZxuAw0n/xJmOvWW/Qbg53ya2CuJKzcHA+OvDpHLHGxkEuiUhwKvqUbfSTzn0o1M00OYITJIvZVINGRtQC7hGvBPWLVBON097mcmnju857I72U3dGdvGhnEUHyrglCV+xSkafQTTlnY9B59EKImUs/kiwRy3cYDWkCgthJgiPA4QSw6WrzaqpY2ET+7n+yY31EOagGA3ufW43qFbHX4diFuXpS1I1PLvvA4KINlMlsFcyR29j4nQk/vb5hMpLmBOlfVH16CXZC98a0ltp9ib7F3e1Wjdogj92kxwfQMYIeQEBp11Tc/PY5U90J51oyk8xYOKfsP3+r9yczmfRDjwR3+tzUMKyZYAsKQVcOGQC7x9sEXg3mBeXRVrlIVZFMuNVcYq4CY40fDIybcI25GxgRbQR7ZUWODG1SL7RF02Z+LQB6APXkzxdQUWLWPryj/EtOgnHQ1I0+BJTWrqGkKbSj41jhRTfS+MZvRXAJ+fNyZFhpkHo54DrCii4cbyM=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGRPkwTcFVg/dIKRq29iWBfkoVFqIQ1pXOCPxfcGWRFF
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGf/hJ2dg/PRwojw63FLyKqua+ChKP+2bc7Eb0p70H6ve1elFVeY8lVRXx33JWc2m/XfgSWPNcUs9zBG8QcFVak=
                                              create=True mode=0644 path=/tmp/ansible.79a2arj3 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:43 compute-0 sudo[130137]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:44.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 09:58:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:44.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 09:58:44 compute-0 sudo[130291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxcgkxrjyjogzqawgrgcqzogufdthxie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162324.369608-196-34252599896042/AnsiballZ_command.py'
Jan 23 09:58:44 compute-0 sudo[130291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:45 compute-0 python3.9[130293]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.79a2arj3' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:58:45 compute-0 sudo[130291]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:45 compute-0 ceph-mon[74335]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:45 compute-0 sudo[130446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnwyuxyiuxisnksqcefnimrgbuddfueu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162325.2960281-220-256267111202570/AnsiballZ_file.py'
Jan 23 09:58:45 compute-0 sudo[130446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:45 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:45 compute-0 python3.9[130448]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.79a2arj3 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:45 compute-0 sudo[130446]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:46.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:46 compute-0 sshd-session[129067]: Connection closed by 192.168.122.30 port 52444
Jan 23 09:58:46 compute-0 sshd-session[129064]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:58:46 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 23 09:58:46 compute-0 systemd[1]: session-45.scope: Consumed 5.327s CPU time.
Jan 23 09:58:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:46 compute-0 systemd-logind[784]: Session 45 logged out. Waiting for processes to exit.
Jan 23 09:58:46 compute-0 systemd-logind[784]: Removed session 45.
Jan 23 09:58:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:46.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:58:46.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:58:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:47 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 09:58:48 compute-0 ceph-mon[74335]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:48.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:48.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:49 compute-0 ceph-mon[74335]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 09:58:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:49 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:49] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Jan 23 09:58:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:49] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Jan 23 09:58:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:58:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:58:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:58:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:58:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:58:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:58:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:58:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:58:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:50.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:50.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:58:51 compute-0 ceph-mon[74335]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:51 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 09:58:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:52.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:52.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:52 compute-0 ceph-mon[74335]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 09:58:53 compute-0 sshd-session[130482]: Accepted publickey for zuul from 192.168.122.30 port 47092 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:58:53 compute-0 systemd-logind[784]: New session 46 of user zuul.
Jan 23 09:58:53 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 23 09:58:53 compute-0 sshd-session[130482]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:58:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:53 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:54.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:58:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:54.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:58:54 compute-0 python3.9[130637]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:58:55 compute-0 ceph-mon[74335]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:55 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:55 compute-0 sudo[130792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzaboshwbpbkerntgkcfxuocdvxxaqgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162335.2139137-51-182368780519526/AnsiballZ_systemd.py'
Jan 23 09:58:55 compute-0 sudo[130792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:56 compute-0 python3.9[130794]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 23 09:58:56 compute-0 sudo[130792]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:56.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:56.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:56 compute-0 sudo[130947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihuojfoxhvuwqxmoljdxyoutafaobvno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162336.4948156-75-167129238471251/AnsiballZ_systemd.py'
Jan 23 09:58:56 compute-0 sudo[130947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:58:56.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:58:57 compute-0 python3.9[130949]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 09:58:57 compute-0 sudo[130947]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:58:57 compute-0 sudo[131052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:58:57 compute-0 sudo[131052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:58:57 compute-0 sudo[131052]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:57 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:57 compute-0 ceph-mon[74335]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:58:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095857 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 09:58:57 compute-0 sudo[131127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reoppksuwnqtazkovsdgrzcxdxzejnjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162337.4669487-102-95649989909327/AnsiballZ_command.py'
Jan 23 09:58:57 compute-0 sudo[131127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 09:58:58 compute-0 python3.9[131129]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:58:58 compute-0 sudo[131127]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:58:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:58.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:58:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:58:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:58:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:58.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:58:58 compute-0 sudo[131281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehnhwdsndmqfjolrgjuhowdrpksggzqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162338.3073368-126-23647479465588/AnsiballZ_stat.py'
Jan 23 09:58:58 compute-0 sudo[131281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:58 compute-0 python3.9[131283]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:58:58 compute-0 sudo[131281]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:59 compute-0 ceph-mon[74335]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 09:58:59 compute-0 sudo[131433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoqungfbamjhzplrjxqpzxudsuunnmut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162339.1892643-153-56743251470951/AnsiballZ_file.py'
Jan 23 09:58:59 compute-0 sudo[131433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:58:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:58:59 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:58:59 compute-0 python3.9[131435]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:58:59 compute-0 sudo[131433]: pam_unix(sudo:session): session closed for user root
Jan 23 09:58:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:58:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:59] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Jan 23 09:58:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:58:59] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Jan 23 09:59:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:00 compute-0 sshd-session[130486]: Connection closed by 192.168.122.30 port 47092
Jan 23 09:59:00 compute-0 sshd-session[130482]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:59:00 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 23 09:59:00 compute-0 systemd[1]: session-46.scope: Consumed 4.102s CPU time.
Jan 23 09:59:00 compute-0 systemd-logind[784]: Session 46 logged out. Waiting for processes to exit.
Jan 23 09:59:00 compute-0 systemd-logind[784]: Removed session 46.
Jan 23 09:59:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:00.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:00.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:01 compute-0 ceph-mon[74335]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:59:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:01 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:59:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:02.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 09:59:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 09:59:03 compute-0 ceph-mon[74335]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:59:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:03 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:59:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:04.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 09:59:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:04.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 09:59:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:59:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:59:05 compute-0 ceph-mon[74335]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:59:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:59:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:05 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:59:06 compute-0 sshd-session[131467]: Accepted publickey for zuul from 192.168.122.30 port 51662 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:59:06 compute-0 systemd-logind[784]: New session 47 of user zuul.
Jan 23 09:59:06 compute-0 systemd[1]: Started Session 47 of User zuul.
Jan 23 09:59:06 compute-0 sshd-session[131467]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:59:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:59:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:06.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:59:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:06.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:59:06.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 09:59:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:59:06.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 09:59:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:59:06.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 09:59:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:07 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:59:07 compute-0 python3.9[131621]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:59:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:07 compute-0 ceph-mon[74335]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:59:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:07 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:07 compute-0 sudo[131776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enhdcrxevntqllgonqheiaowonktkmom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162347.703471-57-238083907444602/AnsiballZ_setup.py'
Jan 23 09:59:07 compute-0 sudo[131776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:08 compute-0 python3.9[131778]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 09:59:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:59:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:08.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:59:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:59:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:08.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:59:08 compute-0 sudo[131776]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:08 compute-0 ceph-mon[74335]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:09 compute-0 sudo[131861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsgiqzxsxtnxleequfkgapjbxdjpcrjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162347.703471-57-238083907444602/AnsiballZ_dnf.py'
Jan 23 09:59:09 compute-0 sudo[131861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:09 compute-0 python3.9[131863]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 09:59:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:09 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:09] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:59:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:09] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:59:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 09:59:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:59:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:10.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:59:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 09:59:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:10.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 09:59:10 compute-0 sudo[131861]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:11 compute-0 ceph-mon[74335]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:11 compute-0 python3.9[132016]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 09:59:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:11 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 09:59:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:12.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:59:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:12.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:59:12 compute-0 ceph-mon[74335]: pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 09:59:13 compute-0 python3.9[132169]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 09:59:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:13 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:13 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 09:59:13 compute-0 python3.9[132320]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:59:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 09:59:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:14.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 09:59:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:14.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 09:59:14 compute-0 python3.9[132471]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 09:59:15 compute-0 ceph-mon[74335]: pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 09:59:15 compute-0 sshd-session[131471]: Connection closed by 192.168.122.30 port 51662
Jan 23 09:59:15 compute-0 sshd-session[131467]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:59:15 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Jan 23 09:59:15 compute-0 systemd[1]: session-47.scope: Consumed 6.395s CPU time.
Jan 23 09:59:15 compute-0 systemd-logind[784]: Session 47 logged out. Waiting for processes to exit.
Jan 23 09:59:15 compute-0 systemd-logind[784]: Removed session 47.
Jan 23 09:59:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:15 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 09:59:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400a980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:16.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:16.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:59:16.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:59:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:17 compute-0 ceph-mon[74335]: pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 09:59:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:17 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:17 compute-0 sudo[132500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:59:17 compute-0 sudo[132500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:17 compute-0 sudo[132500]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 09:59:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:59:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:18.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:59:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80001fd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 09:59:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 09:59:18 compute-0 ceph-mon[74335]: pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 09:59:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:19 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095919 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 09:59:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_09:59:19
Jan 23 09:59:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 09:59:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 09:59:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['cephfs.cephfs.data', '.nfs', 'images', 'volumes', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups', 'vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 23 09:59:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 09:59:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 09:59:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:19] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:59:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:19] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Jan 23 09:59:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:59:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 09:59:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 09:59:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:20.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:20.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:59:21 compute-0 ceph-mon[74335]: pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 09:59:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:21 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80001fd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 09:59:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:22.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:22 compute-0 sshd-session[132530]: Accepted publickey for zuul from 192.168.122.30 port 53646 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 09:59:22 compute-0 systemd-logind[784]: New session 48 of user zuul.
Jan 23 09:59:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 09:59:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:22.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 09:59:22 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 23 09:59:22 compute-0 sshd-session[132530]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 09:59:22 compute-0 ceph-mon[74335]: pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 09:59:23 compute-0 python3.9[132683]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 09:59:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:23 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80001fd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:24.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70000d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:24.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:25 compute-0 sudo[132840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zccpuialwfwxerkwiptbdkeqvmxmvqry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162364.615818-106-122738945841224/AnsiballZ_file.py'
Jan 23 09:59:25 compute-0 sudo[132840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:25 compute-0 python3.9[132842]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:25 compute-0 sudo[132840]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:25 compute-0 ceph-mon[74335]: pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:25 compute-0 sudo[132993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idwfzowhjozvmmlxvtxsqyhzytrxslhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162365.4508562-106-195374678461915/AnsiballZ_file.py'
Jan 23 09:59:25 compute-0 sudo[132993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:25 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:25 compute-0 python3.9[132995]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:25 compute-0 sudo[132993]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 09:59:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 09:59:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80002a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:26.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:26 compute-0 sudo[133146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlhncpfzsphkqpudsqnhkqskgpjmaypm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162366.1492991-151-243684945726995/AnsiballZ_stat.py'
Jan 23 09:59:26 compute-0 sudo[133146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:26 compute-0 ceph-mon[74335]: pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:26 compute-0 python3.9[133148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:26 compute-0 sudo[133146]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:59:26.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:59:27 compute-0 sudo[133269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keufuermycptnqqvgrprrtjqcriovqnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162366.1492991-151-243684945726995/AnsiballZ_copy.py'
Jan 23 09:59:27 compute-0 sudo[133269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:27 compute-0 python3.9[133271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162366.1492991-151-243684945726995/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=00db5fefee2c4a6114dad8af0d0955c55e759bea backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:27 compute-0 sudo[133269]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:27 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:27 compute-0 sudo[133422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdeccpycuguqogpowcypjuumflnesgku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162367.6819992-151-93018900214813/AnsiballZ_stat.py'
Jan 23 09:59:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:27 compute-0 sudo[133422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:28 compute-0 python3.9[133424]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:28 compute-0 sudo[133422]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:28.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:28.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:28 compute-0 sudo[133546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djpvnthavuczndogjrtngmvgzaykhcvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162367.6819992-151-93018900214813/AnsiballZ_copy.py'
Jan 23 09:59:28 compute-0 sudo[133546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:28 compute-0 python3.9[133548]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162367.6819992-151-93018900214813/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ff17d6d1438a69ae92e7570d79b66fb807ae4885 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:28 compute-0 sudo[133546]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:29 compute-0 ceph-mon[74335]: pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:29 compute-0 sudo[133698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzywmplghcqhdmuwnfmklngnxdoitkft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162368.930202-151-188608410956267/AnsiballZ_stat.py'
Jan 23 09:59:29 compute-0 sudo[133698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:29 compute-0 python3.9[133700]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:29 compute-0 sudo[133698]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:29 compute-0 sudo[133822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpmhvejaifpxpayftujughnojkzbzvfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162368.930202-151-188608410956267/AnsiballZ_copy.py'
Jan 23 09:59:29 compute-0 sudo[133822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:29 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80002a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:59:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:29] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 09:59:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:29] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 09:59:30 compute-0 python3.9[133824]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162368.930202-151-188608410956267/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=96e55d1a12ff75de7ce45a74bb6829544f4e6fc4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:30 compute-0 sudo[133822]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.346576) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162370347265, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1053, "num_deletes": 251, "total_data_size": 1926620, "memory_usage": 1963112, "flush_reason": "Manual Compaction"}
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162370365899, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1881107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12213, "largest_seqno": 13265, "table_properties": {"data_size": 1876037, "index_size": 2594, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10506, "raw_average_key_size": 19, "raw_value_size": 1865914, "raw_average_value_size": 3386, "num_data_blocks": 116, "num_entries": 551, "num_filter_entries": 551, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162273, "oldest_key_time": 1769162273, "file_creation_time": 1769162370, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 19329 microseconds, and 10265 cpu microseconds.
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.366142) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1881107 bytes OK
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.366214) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.368544) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.368586) EVENT_LOG_v1 {"time_micros": 1769162370368579, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.368608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1921854, prev total WAL file size 1921854, number of live WAL files 2.
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.371210) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1837KB)], [29(12MB)]
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162370371512, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15336175, "oldest_snapshot_seqno": -1}
Jan 23 09:59:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:59:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:30.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:59:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:30.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4277 keys, 13223510 bytes, temperature: kUnknown
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162370499012, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 13223510, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13192031, "index_size": 19657, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 109365, "raw_average_key_size": 25, "raw_value_size": 13110970, "raw_average_value_size": 3065, "num_data_blocks": 828, "num_entries": 4277, "num_filter_entries": 4277, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769162370, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.499408) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13223510 bytes
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.501611) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.2 rd, 103.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.8 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(15.2) write-amplify(7.0) OK, records in: 4793, records dropped: 516 output_compression: NoCompression
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.501637) EVENT_LOG_v1 {"time_micros": 1769162370501625, "job": 12, "event": "compaction_finished", "compaction_time_micros": 127594, "compaction_time_cpu_micros": 49655, "output_level": 6, "num_output_files": 1, "total_output_size": 13223510, "num_input_records": 4793, "num_output_records": 4277, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162370502190, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162370505208, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.370828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.505325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.505338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.505340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.505342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:59:30 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-09:59:30.505344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 09:59:30 compute-0 sudo[133975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etsakbgpvmujuydoddsfgqflkwdzjtkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162370.3275008-290-199456798736544/AnsiballZ_file.py'
Jan 23 09:59:30 compute-0 sudo[133975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:30 compute-0 python3.9[133977]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:30 compute-0 sudo[133975]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:30 compute-0 sshd-session[133978]: Connection closed by 80.94.92.168 port 34092
Jan 23 09:59:31 compute-0 ceph-mon[74335]: pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:59:31 compute-0 sudo[134128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaafupujuhtvrdaluxhrkdngdolgpzbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162371.0875483-290-100101654488158/AnsiballZ_file.py'
Jan 23 09:59:31 compute-0 sudo[134128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:31 compute-0 python3.9[134130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:31 compute-0 sudo[134128]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:31 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:59:32 compute-0 sudo[134282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slqgakvdckgaxpzrqwhnibbwidntfzzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162371.8633974-336-11960493665728/AnsiballZ_stat.py'
Jan 23 09:59:32 compute-0 sudo[134282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80002a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:32.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:32 compute-0 python3.9[134284]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:32 compute-0 sudo[134282]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:32.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:32 compute-0 sudo[134405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndjaqlgqmgfdlsaypekohgmmlypcrbof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162371.8633974-336-11960493665728/AnsiballZ_copy.py'
Jan 23 09:59:32 compute-0 sudo[134405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:33 compute-0 python3.9[134407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162371.8633974-336-11960493665728/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=a6c9daed5a2b2e5d5d954a6f39509facf669228f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:33 compute-0 sudo[134405]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:33 compute-0 ceph-mon[74335]: pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 09:59:33 compute-0 sudo[134557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvmxbuaxttbnqblzzzkffmksdkubxerk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162373.3156059-336-277559731578332/AnsiballZ_stat.py'
Jan 23 09:59:33 compute-0 sudo[134557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:33 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:33 compute-0 python3.9[134559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:33 compute-0 sudo[134557]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:59:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:34 compute-0 sudo[134682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uavglajapbjnlducudunyfpwqfmstdpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162373.3156059-336-277559731578332/AnsiballZ_copy.py'
Jan 23 09:59:34 compute-0 sudo[134682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:34.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:34.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:34 compute-0 python3.9[134684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162373.3156059-336-277559731578332/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7ea5769d722c11e7459792c631f886a53fdd1360 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:34 compute-0 sudo[134682]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:34 compute-0 sudo[134834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmrntkrlpbperdxjiblnbpqqlkkovmjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162374.6735506-336-199883449552900/AnsiballZ_stat.py'
Jan 23 09:59:34 compute-0 sudo[134834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:59:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:59:35 compute-0 python3.9[134836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:35 compute-0 sudo[134834]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:35 compute-0 ceph-mon[74335]: pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:59:35 compute-0 sudo[134957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acjhxuovugjocupamtqnbgmfzbasrkne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162374.6735506-336-199883449552900/AnsiballZ_copy.py'
Jan 23 09:59:35 compute-0 sudo[134957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:35 compute-0 python3.9[134959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162374.6735506-336-199883449552900/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=672638ba70613ea9068dc4470bccd4cfb1833726 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:35 compute-0 sudo[134957]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:35 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:59:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70002d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:36 compute-0 sudo[135082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:59:36 compute-0 sudo[135082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:36 compute-0 sudo[135082]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:36 compute-0 sudo[135140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vweokwyjuzchmkcgbbmybpqwqfovdqwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162376.057401-476-151337648015159/AnsiballZ_file.py'
Jan 23 09:59:36 compute-0 sudo[135140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:36 compute-0 sudo[135134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 09:59:36 compute-0 sudo[135134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:36.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:36 compute-0 python3.9[135159]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:36 compute-0 sudo[135140]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 09:59:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:59:36 compute-0 ceph-mon[74335]: pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:59:36 compute-0 sudo[135134]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:59:36.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:59:37 compute-0 sudo[135344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwmxnbnjlsykxdkdgwyjpglbptaxnsdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162376.846414-476-80866313867660/AnsiballZ_file.py'
Jan 23 09:59:37 compute-0 sudo[135344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:37 compute-0 python3.9[135346]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 09:59:37 compute-0 sudo[135344]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:37 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:37 compute-0 sudo[135497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piyraxznpwkfhtextxydhtqewgohygxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162377.591514-530-210745891499411/AnsiballZ_stat.py'
Jan 23 09:59:37 compute-0 sudo[135497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095937 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 09:59:37 compute-0 sudo[135499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:59:37 compute-0 sudo[135499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:37 compute-0 sudo[135499]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 09:59:38 compute-0 python3.9[135500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:38 compute-0 sudo[135497]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:59:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:59:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 09:59:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:59:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 09:59:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:38 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:38 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70002d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:38 compute-0 sudo[135646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrtwlonavvqpgsiyacaexcssatmyjcxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162377.591514-530-210745891499411/AnsiballZ_copy.py'
Jan 23 09:59:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:59:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:38.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:59:38 compute-0 sudo[135646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:38 compute-0 python3.9[135648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162377.591514-530-210745891499411/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b05b657a5f8faa2b40a7fea08f9e62839ac74cf8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 09:59:38 compute-0 sudo[135646]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 09:59:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:59:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 09:59:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:59:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 09:59:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:59:38 compute-0 sudo[135714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:59:38 compute-0 sudo[135714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:38 compute-0 sudo[135714]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:39 compute-0 sudo[135758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 09:59:39 compute-0 sudo[135758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:39 compute-0 sudo[135848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txfkyzmlkqwchgloalrnmoqhxcplheva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162378.8704407-530-27214146362211/AnsiballZ_stat.py'
Jan 23 09:59:39 compute-0 sudo[135848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:39 compute-0 python3.9[135850]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:39 compute-0 sudo[135848]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:39 compute-0 podman[135890]: 2026-01-23 09:59:39.515521981 +0000 UTC m=+0.056589672 container create 2bb700bdf3848f7b7d85cce47ce9a6114221a6472149c20f95210daa1bbd94cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mahavira, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:59:39 compute-0 systemd[1]: Started libpod-conmon-2bb700bdf3848f7b7d85cce47ce9a6114221a6472149c20f95210daa1bbd94cd.scope.
Jan 23 09:59:39 compute-0 podman[135890]: 2026-01-23 09:59:39.486464369 +0000 UTC m=+0.027532080 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:59:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:59:39 compute-0 podman[135890]: 2026-01-23 09:59:39.617650042 +0000 UTC m=+0.158717773 container init 2bb700bdf3848f7b7d85cce47ce9a6114221a6472149c20f95210daa1bbd94cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mahavira, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:59:39 compute-0 podman[135890]: 2026-01-23 09:59:39.625183585 +0000 UTC m=+0.166251276 container start 2bb700bdf3848f7b7d85cce47ce9a6114221a6472149c20f95210daa1bbd94cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Jan 23 09:59:39 compute-0 podman[135890]: 2026-01-23 09:59:39.629109976 +0000 UTC m=+0.170177697 container attach 2bb700bdf3848f7b7d85cce47ce9a6114221a6472149c20f95210daa1bbd94cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mahavira, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:59:39 compute-0 systemd[1]: libpod-2bb700bdf3848f7b7d85cce47ce9a6114221a6472149c20f95210daa1bbd94cd.scope: Deactivated successfully.
Jan 23 09:59:39 compute-0 determined_mahavira[135950]: 167 167
Jan 23 09:59:39 compute-0 conmon[135950]: conmon 2bb700bdf3848f7b7d85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2bb700bdf3848f7b7d85cce47ce9a6114221a6472149c20f95210daa1bbd94cd.scope/container/memory.events
Jan 23 09:59:39 compute-0 podman[135890]: 2026-01-23 09:59:39.634600602 +0000 UTC m=+0.175668303 container died 2bb700bdf3848f7b7d85cce47ce9a6114221a6472149c20f95210daa1bbd94cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mahavira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Jan 23 09:59:39 compute-0 ceph-mon[74335]: pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 09:59:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:59:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 09:59:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 09:59:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 09:59:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 09:59:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd97a9f9c12438e0187f837f72769e0d36c94d5f9520ab2d320128f4f6e722f1-merged.mount: Deactivated successfully.
Jan 23 09:59:39 compute-0 podman[135890]: 2026-01-23 09:59:39.688175028 +0000 UTC m=+0.229242719 container remove 2bb700bdf3848f7b7d85cce47ce9a6114221a6472149c20f95210daa1bbd94cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mahavira, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 09:59:39 compute-0 systemd[1]: libpod-conmon-2bb700bdf3848f7b7d85cce47ce9a6114221a6472149c20f95210daa1bbd94cd.scope: Deactivated successfully.
Jan 23 09:59:39 compute-0 sudo[136048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-admyvnimxkdhnuzlsokvjtqutafeffbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162378.8704407-530-27214146362211/AnsiballZ_copy.py'
Jan 23 09:59:39 compute-0 sudo[136048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:39 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:39 compute-0 podman[136051]: 2026-01-23 09:59:39.851455289 +0000 UTC m=+0.027877570 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:59:39 compute-0 podman[136051]: 2026-01-23 09:59:39.95925671 +0000 UTC m=+0.135678991 container create 206890d6ccbec3dee10ec4c7d377b7e44efd585d7677e8a019d579ec02f6cfd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_kare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 23 09:59:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:39] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 09:59:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:39] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 09:59:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:59:40 compute-0 systemd[1]: Started libpod-conmon-206890d6ccbec3dee10ec4c7d377b7e44efd585d7677e8a019d579ec02f6cfd7.scope.
Jan 23 09:59:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:59:40 compute-0 python3.9[136056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162378.8704407-530-27214146362211/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7ea5769d722c11e7459792c631f886a53fdd1360 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cd2edfd35f0936c70ea1c5865c58d429d975228c86d714065518cc30f48eb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cd2edfd35f0936c70ea1c5865c58d429d975228c86d714065518cc30f48eb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cd2edfd35f0936c70ea1c5865c58d429d975228c86d714065518cc30f48eb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cd2edfd35f0936c70ea1c5865c58d429d975228c86d714065518cc30f48eb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cd2edfd35f0936c70ea1c5865c58d429d975228c86d714065518cc30f48eb7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:40 compute-0 podman[136051]: 2026-01-23 09:59:40.080630765 +0000 UTC m=+0.257053046 container init 206890d6ccbec3dee10ec4c7d377b7e44efd585d7677e8a019d579ec02f6cfd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 09:59:40 compute-0 sudo[136048]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:40 compute-0 podman[136051]: 2026-01-23 09:59:40.089465455 +0000 UTC m=+0.265887716 container start 206890d6ccbec3dee10ec4c7d377b7e44efd585d7677e8a019d579ec02f6cfd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 09:59:40 compute-0 podman[136051]: 2026-01-23 09:59:40.097525213 +0000 UTC m=+0.273947504 container attach 206890d6ccbec3dee10ec4c7d377b7e44efd585d7677e8a019d579ec02f6cfd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_kare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 09:59:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70002d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:40.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70002d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:40.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:40 compute-0 condescending_kare[136069]: --> passed data devices: 0 physical, 1 LVM
Jan 23 09:59:40 compute-0 condescending_kare[136069]: --> All data devices are unavailable
Jan 23 09:59:40 compute-0 systemd[1]: libpod-206890d6ccbec3dee10ec4c7d377b7e44efd585d7677e8a019d579ec02f6cfd7.scope: Deactivated successfully.
Jan 23 09:59:40 compute-0 podman[136051]: 2026-01-23 09:59:40.535440176 +0000 UTC m=+0.711862447 container died 206890d6ccbec3dee10ec4c7d377b7e44efd585d7677e8a019d579ec02f6cfd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:59:40 compute-0 sudo[136233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sobsupyddtuawagjqsnjesnatqfijvxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162380.2292876-530-58255390527424/AnsiballZ_stat.py'
Jan 23 09:59:40 compute-0 sudo[136233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5cd2edfd35f0936c70ea1c5865c58d429d975228c86d714065518cc30f48eb7-merged.mount: Deactivated successfully.
Jan 23 09:59:40 compute-0 podman[136051]: 2026-01-23 09:59:40.591271516 +0000 UTC m=+0.767693777 container remove 206890d6ccbec3dee10ec4c7d377b7e44efd585d7677e8a019d579ec02f6cfd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_kare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:59:40 compute-0 systemd[1]: libpod-conmon-206890d6ccbec3dee10ec4c7d377b7e44efd585d7677e8a019d579ec02f6cfd7.scope: Deactivated successfully.
Jan 23 09:59:40 compute-0 sudo[135758]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:40 compute-0 sudo[136246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:59:40 compute-0 sudo[136246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:40 compute-0 sudo[136246]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:40 compute-0 python3.9[136242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:40 compute-0 sudo[136233]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:40 compute-0 sudo[136271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 09:59:40 compute-0 sudo[136271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:40 compute-0 ceph-mon[74335]: pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:59:41 compute-0 sudo[136456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cccqgnabjucluqwwupkbbmgceofeofsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162380.2292876-530-58255390527424/AnsiballZ_copy.py'
Jan 23 09:59:41 compute-0 sudo[136456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:41 compute-0 podman[136461]: 2026-01-23 09:59:41.246999834 +0000 UTC m=+0.024452654 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:59:41 compute-0 python3.9[136459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162380.2292876-530-58255390527424/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=aa467cb6b087024f290169ecc83308fae0a8d45a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:41 compute-0 sudo[136456]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:41 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 09:59:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 7205 writes, 30K keys, 7205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 7205 writes, 1228 syncs, 5.87 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7205 writes, 30K keys, 7205 commit groups, 1.0 writes per commit group, ingest: 20.49 MB, 0.03 MB/s
                                           Interval WAL: 7205 writes, 1228 syncs, 5.87 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 09:59:41 compute-0 podman[136461]: 2026-01-23 09:59:41.894009776 +0000 UTC m=+0.671462575 container create 808129b1b2e4a1a147b560af7ccbd097bbb5f4efd326b2a0480394276d41c9a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_galileo, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 09:59:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:59:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:42 compute-0 systemd[1]: Started libpod-conmon-808129b1b2e4a1a147b560af7ccbd097bbb5f4efd326b2a0480394276d41c9a3.scope.
Jan 23 09:59:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:59:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:42.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:42.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:42 compute-0 podman[136461]: 2026-01-23 09:59:42.489117058 +0000 UTC m=+1.266569867 container init 808129b1b2e4a1a147b560af7ccbd097bbb5f4efd326b2a0480394276d41c9a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 23 09:59:42 compute-0 podman[136461]: 2026-01-23 09:59:42.496323682 +0000 UTC m=+1.273776461 container start 808129b1b2e4a1a147b560af7ccbd097bbb5f4efd326b2a0480394276d41c9a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_galileo, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:59:42 compute-0 sudo[136631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbdxetwxnwgpeujzxeyuqniwtzcbmcxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162382.2212765-717-86233682718567/AnsiballZ_file.py'
Jan 23 09:59:42 compute-0 compassionate_galileo[136540]: 167 167
Jan 23 09:59:42 compute-0 systemd[1]: libpod-808129b1b2e4a1a147b560af7ccbd097bbb5f4efd326b2a0480394276d41c9a3.scope: Deactivated successfully.
Jan 23 09:59:42 compute-0 sudo[136631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:42 compute-0 podman[136461]: 2026-01-23 09:59:42.588775299 +0000 UTC m=+1.366228098 container attach 808129b1b2e4a1a147b560af7ccbd097bbb5f4efd326b2a0480394276d41c9a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_galileo, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 09:59:42 compute-0 podman[136461]: 2026-01-23 09:59:42.589961492 +0000 UTC m=+1.367414281 container died 808129b1b2e4a1a147b560af7ccbd097bbb5f4efd326b2a0480394276d41c9a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:59:42 compute-0 python3.9[136636]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:42 compute-0 sudo[136631]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfaca236e83915c6e5611db594555467e7fb836e765236a4493447e1ed86158c-merged.mount: Deactivated successfully.
Jan 23 09:59:43 compute-0 sudo[136797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzglotfuehufilrvzopkpsujxlnlrchi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162382.9531739-743-233378428333409/AnsiballZ_stat.py'
Jan 23 09:59:43 compute-0 sudo[136797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:43 compute-0 python3.9[136799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:43 compute-0 sudo[136797]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:43 compute-0 podman[136461]: 2026-01-23 09:59:43.624697656 +0000 UTC m=+2.402150445 container remove 808129b1b2e4a1a147b560af7ccbd097bbb5f4efd326b2a0480394276d41c9a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Jan 23 09:59:43 compute-0 systemd[1]: libpod-conmon-808129b1b2e4a1a147b560af7ccbd097bbb5f4efd326b2a0480394276d41c9a3.scope: Deactivated successfully.
Jan 23 09:59:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:43 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70003a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:43 compute-0 sudo[136944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xboxzucuhnczkbeqpkkxmoaqsotmhtgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162382.9531739-743-233378428333409/AnsiballZ_copy.py'
Jan 23 09:59:43 compute-0 sudo[136944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:43 compute-0 podman[136880]: 2026-01-23 09:59:43.798815744 +0000 UTC m=+0.027871970 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:59:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:59:44 compute-0 ceph-mon[74335]: pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 09:59:44 compute-0 python3.9[136946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162382.9531739-743-233378428333409/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=022ad0c65ad9b9ad4d20c21b3609f531109c55bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:44 compute-0 podman[136880]: 2026-01-23 09:59:44.093801212 +0000 UTC m=+0.322857408 container create 55a2a14a9eb120b972dee100d8723f9eecf69a54dda0f0af1604be4e981ac907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:59:44 compute-0 sudo[136944]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:44 compute-0 systemd[1]: Started libpod-conmon-55a2a14a9eb120b972dee100d8723f9eecf69a54dda0f0af1604be4e981ac907.scope.
Jan 23 09:59:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:44.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf42ce6cc1647e30268d925d87cb2ee1880cf53a415ca80ae39bd6ddc96d15d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf42ce6cc1647e30268d925d87cb2ee1880cf53a415ca80ae39bd6ddc96d15d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf42ce6cc1647e30268d925d87cb2ee1880cf53a415ca80ae39bd6ddc96d15d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf42ce6cc1647e30268d925d87cb2ee1880cf53a415ca80ae39bd6ddc96d15d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:44.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:44 compute-0 sudo[137102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asvzglyotfbscgghmgtwryxlzhotfdwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162384.3151486-786-155221494237100/AnsiballZ_file.py'
Jan 23 09:59:44 compute-0 sudo[137102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:44 compute-0 podman[136880]: 2026-01-23 09:59:44.712808521 +0000 UTC m=+0.941864747 container init 55a2a14a9eb120b972dee100d8723f9eecf69a54dda0f0af1604be4e981ac907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 09:59:44 compute-0 podman[136880]: 2026-01-23 09:59:44.724618785 +0000 UTC m=+0.953674991 container start 55a2a14a9eb120b972dee100d8723f9eecf69a54dda0f0af1604be4e981ac907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:59:44 compute-0 podman[136880]: 2026-01-23 09:59:44.729241386 +0000 UTC m=+0.958297592 container attach 55a2a14a9eb120b972dee100d8723f9eecf69a54dda0f0af1604be4e981ac907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 23 09:59:44 compute-0 python3.9[137104]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:44 compute-0 sudo[137102]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:45 compute-0 serene_black[137030]: {
Jan 23 09:59:45 compute-0 serene_black[137030]:     "1": [
Jan 23 09:59:45 compute-0 serene_black[137030]:         {
Jan 23 09:59:45 compute-0 serene_black[137030]:             "devices": [
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "/dev/loop3"
Jan 23 09:59:45 compute-0 serene_black[137030]:             ],
Jan 23 09:59:45 compute-0 serene_black[137030]:             "lv_name": "ceph_lv0",
Jan 23 09:59:45 compute-0 serene_black[137030]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:59:45 compute-0 serene_black[137030]:             "lv_size": "21470642176",
Jan 23 09:59:45 compute-0 serene_black[137030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 09:59:45 compute-0 serene_black[137030]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:59:45 compute-0 serene_black[137030]:             "name": "ceph_lv0",
Jan 23 09:59:45 compute-0 serene_black[137030]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:59:45 compute-0 serene_black[137030]:             "tags": {
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.cluster_name": "ceph",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.crush_device_class": "",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.encrypted": "0",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.osd_id": "1",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.type": "block",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.vdo": "0",
Jan 23 09:59:45 compute-0 serene_black[137030]:                 "ceph.with_tpm": "0"
Jan 23 09:59:45 compute-0 serene_black[137030]:             },
Jan 23 09:59:45 compute-0 serene_black[137030]:             "type": "block",
Jan 23 09:59:45 compute-0 serene_black[137030]:             "vg_name": "ceph_vg0"
Jan 23 09:59:45 compute-0 serene_black[137030]:         }
Jan 23 09:59:45 compute-0 serene_black[137030]:     ]
Jan 23 09:59:45 compute-0 serene_black[137030]: }
Jan 23 09:59:45 compute-0 systemd[1]: libpod-55a2a14a9eb120b972dee100d8723f9eecf69a54dda0f0af1604be4e981ac907.scope: Deactivated successfully.
Jan 23 09:59:45 compute-0 podman[136880]: 2026-01-23 09:59:45.074477027 +0000 UTC m=+1.303533223 container died 55a2a14a9eb120b972dee100d8723f9eecf69a54dda0f0af1604be4e981ac907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 09:59:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cf42ce6cc1647e30268d925d87cb2ee1880cf53a415ca80ae39bd6ddc96d15d-merged.mount: Deactivated successfully.
Jan 23 09:59:45 compute-0 podman[136880]: 2026-01-23 09:59:45.452406173 +0000 UTC m=+1.681462379 container remove 55a2a14a9eb120b972dee100d8723f9eecf69a54dda0f0af1604be4e981ac907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_black, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 09:59:45 compute-0 systemd[1]: libpod-conmon-55a2a14a9eb120b972dee100d8723f9eecf69a54dda0f0af1604be4e981ac907.scope: Deactivated successfully.
Jan 23 09:59:45 compute-0 sudo[136271]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:45 compute-0 sudo[137272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztqdsezgfczoyjznlohvjtkczcrdyjrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162385.0523567-810-264671725874603/AnsiballZ_stat.py'
Jan 23 09:59:45 compute-0 sudo[137272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:45 compute-0 sudo[137271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 09:59:45 compute-0 sudo[137271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:45 compute-0 sudo[137271]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:45 compute-0 sudo[137299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 09:59:45 compute-0 sudo[137299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:45 compute-0 python3.9[137286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:45 compute-0 sudo[137272]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:45 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:59:46 compute-0 podman[137436]: 2026-01-23 09:59:46.106397411 +0000 UTC m=+0.025430180 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:59:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70003a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:46 compute-0 sudo[137500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipmfudvcekipxiuwwzhhwgniqkxepytn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162385.0523567-810-264671725874603/AnsiballZ_copy.py'
Jan 23 09:59:46 compute-0 sudo[137500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:46 compute-0 ceph-mon[74335]: pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:59:46 compute-0 podman[137436]: 2026-01-23 09:59:46.255401088 +0000 UTC m=+0.174433837 container create a34c106c60f056154e80c5cb751a19f47b109e5d4622c924a1444a0155b8cc1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 09:59:46 compute-0 systemd[1]: Started libpod-conmon-a34c106c60f056154e80c5cb751a19f47b109e5d4622c924a1444a0155b8cc1e.scope.
Jan 23 09:59:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:59:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:46.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:46 compute-0 python3.9[137502]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162385.0523567-810-264671725874603/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=022ad0c65ad9b9ad4d20c21b3609f531109c55bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:46.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:46 compute-0 sudo[137500]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:46 compute-0 podman[137436]: 2026-01-23 09:59:46.522190148 +0000 UTC m=+0.441222917 container init a34c106c60f056154e80c5cb751a19f47b109e5d4622c924a1444a0155b8cc1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_varahamihira, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 09:59:46 compute-0 podman[137436]: 2026-01-23 09:59:46.531608484 +0000 UTC m=+0.450641233 container start a34c106c60f056154e80c5cb751a19f47b109e5d4622c924a1444a0155b8cc1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_varahamihira, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:59:46 compute-0 tender_varahamihira[137505]: 167 167
Jan 23 09:59:46 compute-0 systemd[1]: libpod-a34c106c60f056154e80c5cb751a19f47b109e5d4622c924a1444a0155b8cc1e.scope: Deactivated successfully.
Jan 23 09:59:46 compute-0 podman[137436]: 2026-01-23 09:59:46.565655108 +0000 UTC m=+0.484687887 container attach a34c106c60f056154e80c5cb751a19f47b109e5d4622c924a1444a0155b8cc1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_varahamihira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 09:59:46 compute-0 podman[137436]: 2026-01-23 09:59:46.566384439 +0000 UTC m=+0.485417208 container died a34c106c60f056154e80c5cb751a19f47b109e5d4622c924a1444a0155b8cc1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 23 09:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c50a2819e735d24b3c1018a03ef138514fc9945288424fda6d7ca62069266bb-merged.mount: Deactivated successfully.
Jan 23 09:59:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:59:46.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 09:59:47 compute-0 sudo[137670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqnhlldyavplrhvqhslgnaafzqhcxrmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162386.7318141-869-92005065398960/AnsiballZ_file.py'
Jan 23 09:59:47 compute-0 sudo[137670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:47 compute-0 podman[137436]: 2026-01-23 09:59:47.208617475 +0000 UTC m=+1.127650214 container remove a34c106c60f056154e80c5cb751a19f47b109e5d4622c924a1444a0155b8cc1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_varahamihira, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 09:59:47 compute-0 python3.9[137672]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:47 compute-0 sudo[137670]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:47 compute-0 systemd[1]: libpod-conmon-a34c106c60f056154e80c5cb751a19f47b109e5d4622c924a1444a0155b8cc1e.scope: Deactivated successfully.
Jan 23 09:59:47 compute-0 podman[137703]: 2026-01-23 09:59:47.365975338 +0000 UTC m=+0.028153408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 09:59:47 compute-0 sudo[137843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqppjwekstrncblpvdrcxursaghqapux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162387.427544-895-220828076439490/AnsiballZ_stat.py'
Jan 23 09:59:47 compute-0 sudo[137843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:47 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0044c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:48 compute-0 python3.9[137845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:48 compute-0 sudo[137843]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:48 compute-0 podman[137703]: 2026-01-23 09:59:48.375268833 +0000 UTC m=+1.037446873 container create 730e072a0f5131da9484e812a949c35a46b35e7869f215ee7eaf00493e8b447a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_blackwell, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:59:48 compute-0 ceph-mon[74335]: pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 09:59:48 compute-0 systemd[1]: Started libpod-conmon-730e072a0f5131da9484e812a949c35a46b35e7869f215ee7eaf00493e8b447a.scope.
Jan 23 09:59:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:48.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70003a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 09:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfac62d6df53d7ee36edb3bf7bda0ad81af792034ade4018ad994004e76b09f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfac62d6df53d7ee36edb3bf7bda0ad81af792034ade4018ad994004e76b09f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfac62d6df53d7ee36edb3bf7bda0ad81af792034ade4018ad994004e76b09f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfac62d6df53d7ee36edb3bf7bda0ad81af792034ade4018ad994004e76b09f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 09:59:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 09:59:48 compute-0 podman[137703]: 2026-01-23 09:59:48.472821843 +0000 UTC m=+1.134999903 container init 730e072a0f5131da9484e812a949c35a46b35e7869f215ee7eaf00493e8b447a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 09:59:48 compute-0 podman[137703]: 2026-01-23 09:59:48.482826947 +0000 UTC m=+1.145004987 container start 730e072a0f5131da9484e812a949c35a46b35e7869f215ee7eaf00493e8b447a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 09:59:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:48.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:48 compute-0 podman[137703]: 2026-01-23 09:59:48.489572707 +0000 UTC m=+1.151750777 container attach 730e072a0f5131da9484e812a949c35a46b35e7869f215ee7eaf00493e8b447a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 09:59:48 compute-0 sudo[137975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzgjkuhqbcmpffvonhprnaaiccqduihs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162387.427544-895-220828076439490/AnsiballZ_copy.py'
Jan 23 09:59:48 compute-0 sudo[137975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:48 compute-0 python3.9[137977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162387.427544-895-220828076439490/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=022ad0c65ad9b9ad4d20c21b3609f531109c55bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:48 compute-0 sudo[137975]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:49 compute-0 lvm[138146]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 09:59:49 compute-0 lvm[138146]: VG ceph_vg0 finished
Jan 23 09:59:49 compute-0 elastic_blackwell[137920]: {}
Jan 23 09:59:49 compute-0 systemd[1]: libpod-730e072a0f5131da9484e812a949c35a46b35e7869f215ee7eaf00493e8b447a.scope: Deactivated successfully.
Jan 23 09:59:49 compute-0 systemd[1]: libpod-730e072a0f5131da9484e812a949c35a46b35e7869f215ee7eaf00493e8b447a.scope: Consumed 1.296s CPU time.
Jan 23 09:59:49 compute-0 podman[137703]: 2026-01-23 09:59:49.303438291 +0000 UTC m=+1.965616361 container died 730e072a0f5131da9484e812a949c35a46b35e7869f215ee7eaf00493e8b447a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_blackwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 09:59:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cfac62d6df53d7ee36edb3bf7bda0ad81af792034ade4018ad994004e76b09f-merged.mount: Deactivated successfully.
Jan 23 09:59:49 compute-0 podman[137703]: 2026-01-23 09:59:49.366776453 +0000 UTC m=+2.028954493 container remove 730e072a0f5131da9484e812a949c35a46b35e7869f215ee7eaf00493e8b447a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_blackwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 09:59:49 compute-0 sudo[138211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqfcanovgzdwrsluimfxgwfjgrbhtttx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162389.0619364-945-35534990058929/AnsiballZ_file.py'
Jan 23 09:59:49 compute-0 sudo[138211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:49 compute-0 systemd[1]: libpod-conmon-730e072a0f5131da9484e812a949c35a46b35e7869f215ee7eaf00493e8b447a.scope: Deactivated successfully.
Jan 23 09:59:49 compute-0 ceph-mon[74335]: pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:49 compute-0 sudo[137299]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 09:59:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 09:59:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:49 compute-0 sudo[138214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 09:59:49 compute-0 sudo[138214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:49 compute-0 sudo[138214]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:49 compute-0 python3.9[138213]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:49 compute-0 sudo[138211]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:49 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70003a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:49] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 09:59:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:49] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 09:59:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 09:59:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:59:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:59:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:59:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:59:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:59:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 09:59:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 09:59:50 compute-0 sudo[138391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyqymojidwsqjovvygsiksxerhbmxlnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162389.8029022-973-126516255329761/AnsiballZ_stat.py'
Jan 23 09:59:50 compute-0 sudo[138391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:50 compute-0 python3.9[138393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:50 compute-0 sudo[138391]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:50.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:50.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 09:59:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 09:59:50 compute-0 sudo[138514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klbmllppasebxitcjwmbnlcdngumowuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162389.8029022-973-126516255329761/AnsiballZ_copy.py'
Jan 23 09:59:50 compute-0 sudo[138514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:50 compute-0 python3.9[138516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162389.8029022-973-126516255329761/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=022ad0c65ad9b9ad4d20c21b3609f531109c55bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:51 compute-0 sudo[138514]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:51 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 09:59:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:51 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 09:59:51 compute-0 sudo[138666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyxwbshqrenjcfqshzyijlgondaipfzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162391.192268-1022-266820369585313/AnsiballZ_file.py'
Jan 23 09:59:51 compute-0 sudo[138666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:51 compute-0 python3.9[138668]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:51 compute-0 ceph-mon[74335]: pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 09:59:51 compute-0 sudo[138666]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:51 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 595 B/s wr, 1 op/s
Jan 23 09:59:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70003a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:52 compute-0 sudo[138820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koacejvuxycpmspqhxmccyyhqycjwtig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162392.009689-1050-22879998828827/AnsiballZ_stat.py'
Jan 23 09:59:52 compute-0 sudo[138820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 09:59:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:52.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 09:59:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:52.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:52 compute-0 python3.9[138822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:52 compute-0 sudo[138820]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:52 compute-0 ceph-mon[74335]: pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 595 B/s wr, 1 op/s
Jan 23 09:59:52 compute-0 sudo[138943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwpwgkwwixevyngznkbeylftwytukazy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162392.009689-1050-22879998828827/AnsiballZ_copy.py'
Jan 23 09:59:52 compute-0 sudo[138943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:53 compute-0 python3.9[138945]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162392.009689-1050-22879998828827/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=022ad0c65ad9b9ad4d20c21b3609f531109c55bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:53 compute-0 sudo[138943]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:53 compute-0 sudo[139095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxyddfbelvuguagzgkvjscfezqizgrgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162393.3599052-1090-180563747751823/AnsiballZ_file.py'
Jan 23 09:59:53 compute-0 sudo[139095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:53 compute-0 python3.9[139097]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 09:59:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:53 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:53 compute-0 sudo[139095]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 595 B/s wr, 1 op/s
Jan 23 09:59:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:54 compute-0 sudo[139249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfezpkzflnvjxscduhxbualxgxrmiwyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162393.998657-1098-81631773402605/AnsiballZ_stat.py'
Jan 23 09:59:54 compute-0 sudo[139249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:54 compute-0 python3.9[139251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 09:59:54 compute-0 sudo[139249]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70003a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:54.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:54.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 09:59:54 compute-0 sudo[139372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-funxjaociuqtpmcubjptawotjwmnxzlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162393.998657-1098-81631773402605/AnsiballZ_copy.py'
Jan 23 09:59:54 compute-0 sudo[139372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 09:59:55 compute-0 python3.9[139374]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162393.998657-1098-81631773402605/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=022ad0c65ad9b9ad4d20c21b3609f531109c55bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 09:59:55 compute-0 sudo[139372]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095955 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 09:59:55 compute-0 ceph-mon[74335]: pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 595 B/s wr, 1 op/s
Jan 23 09:59:55 compute-0 sshd-session[132533]: Connection closed by 192.168.122.30 port 53646
Jan 23 09:59:55 compute-0 sshd-session[132530]: pam_unix(sshd:session): session closed for user zuul
Jan 23 09:59:55 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 23 09:59:55 compute-0 systemd[1]: session-48.scope: Consumed 25.649s CPU time.
Jan 23 09:59:55 compute-0 systemd-logind[784]: Session 48 logged out. Waiting for processes to exit.
Jan 23 09:59:55 compute-0 systemd-logind[784]: Removed session 48.
Jan 23 09:59:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:55 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 595 B/s wr, 1 op/s
Jan 23 09:59:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:56.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c580b5d0 =====
Jan 23 09:59:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c580b5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 09:59:56 compute-0 radosgw[93748]: beast: 0x7fa5c580b5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:56.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 09:59:56 compute-0 ceph-mon[74335]: pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 595 B/s wr, 1 op/s
Jan 23 09:59:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:59:56.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 09:59:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T09:59:56.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 09:59:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:57 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 936 B/s wr, 3 op/s
Jan 23 09:59:58 compute-0 sudo[139404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 09:59:58 compute-0 sudo[139404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 09:59:58 compute-0 sudo[139404]: pam_unix(sudo:session): session closed for user root
Jan 23 09:59:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 09:59:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c580b5d0 =====
Jan 23 09:59:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 09:59:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c580b5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:59:58 compute-0 radosgw[93748]: beast: 0x7fa5c580b5d0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:58.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:59:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 09:59:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:58.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 09:59:59 compute-0 ceph-mon[74335]: pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 936 B/s wr, 3 op/s
Jan 23 09:59:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 09:59:59 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 09:59:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/095959 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 09:59:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:59] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 09:59:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:09:59:59] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:00:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:00:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:00:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Jan 23 10:00:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 851 B/s wr, 2 op/s
Jan 23 10:00:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c580b5d0 =====
Jan 23 10:00:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:00.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c580b5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:00 compute-0 radosgw[93748]: beast: 0x7fa5c580b5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:00.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:00 compute-0 ceph-mon[74335]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:00:00 compute-0 ceph-mon[74335]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:00:00 compute-0 ceph-mon[74335]:      osd.1 observed slow operation indications in BlueStore
Jan 23 10:00:01 compute-0 ceph-mon[74335]: pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 851 B/s wr, 2 op/s
Jan 23 10:00:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:01 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 851 B/s wr, 2 op/s
Jan 23 10:00:02 compute-0 sshd-session[139434]: Accepted publickey for zuul from 192.168.122.30 port 53036 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 10:00:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:02 compute-0 systemd-logind[784]: New session 49 of user zuul.
Jan 23 10:00:02 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 23 10:00:02 compute-0 sshd-session[139434]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:00:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:02.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:02.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:02 compute-0 ceph-mon[74335]: pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 851 B/s wr, 2 op/s
Jan 23 10:00:02 compute-0 sudo[139587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsagrmeqqdbnqmoqhnekwzztrrikzyvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162402.344589-21-101427879837796/AnsiballZ_file.py'
Jan 23 10:00:02 compute-0 sudo[139587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:03 compute-0 python3.9[139589]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:03 compute-0 sudo[139587]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:03 compute-0 sudo[139740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgdomzadcfzxlxctnrnritlxamqtdhtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162403.3326924-57-175941298338645/AnsiballZ_stat.py'
Jan 23 10:00:03 compute-0 sudo[139740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:03 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 23 10:00:04 compute-0 python3.9[139742]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:04 compute-0 sudo[139740]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:04.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:04.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:04 compute-0 sudo[139864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbxzdslftguctvqibwaqpbijoezdjcnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162403.3326924-57-175941298338645/AnsiballZ_copy.py'
Jan 23 10:00:04 compute-0 sudo[139864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:04 compute-0 python3.9[139866]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162403.3326924-57-175941298338645/.source.conf _original_basename=ceph.conf follow=False checksum=c8d90d44a83782ff84a3d797d09c3b204e2d1c61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:04 compute-0 sudo[139864]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:00:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:00:05 compute-0 ceph-mon[74335]: pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 23 10:00:05 compute-0 sudo[140016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygkoanfuwpuhzkefkmqjrjhaizftvrip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162405.1176436-57-7120072118527/AnsiballZ_stat.py'
Jan 23 10:00:05 compute-0 sudo[140016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:05 compute-0 python3.9[140018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:05 compute-0 sudo[140016]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:05 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 23 10:00:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:00:06 compute-0 sudo[140141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jowphvvbhszctgazyrkgkqkenzactqdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162405.1176436-57-7120072118527/AnsiballZ_copy.py'
Jan 23 10:00:06 compute-0 sudo[140141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0046a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:06 compute-0 python3.9[140143]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162405.1176436-57-7120072118527/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=a6273c4bda164a032598e5e81cbd7f6e9c0876d5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:00:06 compute-0 sudo[140141]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:06.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:06.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:06 compute-0 sshd-session[139437]: Connection closed by 192.168.122.30 port 53036
Jan 23 10:00:06 compute-0 sshd-session[139434]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:00:06 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 23 10:00:06 compute-0 systemd[1]: session-49.scope: Consumed 3.075s CPU time.
Jan 23 10:00:06 compute-0 systemd-logind[784]: Session 49 logged out. Waiting for processes to exit.
Jan 23 10:00:06 compute-0 systemd-logind[784]: Removed session 49.
Jan 23 10:00:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:00:06.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:00:07 compute-0 ceph-mon[74335]: pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 23 10:00:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:07 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 852 B/s wr, 2 op/s
Jan 23 10:00:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:08.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:08.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:08 compute-0 ceph-mon[74335]: pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 852 B/s wr, 2 op/s
Jan 23 10:00:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:09 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:00:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:09 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:00:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:09 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:09] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:00:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:09] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:00:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Jan 23 10:00:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:10.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:11 compute-0 ceph-mon[74335]: pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Jan 23 10:00:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:11 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:00:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:00:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 10:00:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:12.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 10:00:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:12.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:12 compute-0 sshd-session[140174]: Accepted publickey for zuul from 192.168.122.30 port 58092 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 10:00:12 compute-0 systemd-logind[784]: New session 50 of user zuul.
Jan 23 10:00:12 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 23 10:00:12 compute-0 sshd-session[140174]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:00:13 compute-0 ceph-mon[74335]: pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:00:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:13 compute-0 python3.9[140327]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:00:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:13 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:00:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:14.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:14.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:14 compute-0 sudo[140483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpiavlzsrrpqqsclflgnzowmzbmdazgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162414.4264598-57-138492028542974/AnsiballZ_file.py'
Jan 23 10:00:14 compute-0 sudo[140483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:15 compute-0 python3.9[140485]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:00:15 compute-0 sudo[140483]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:15 compute-0 ceph-mon[74335]: pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:00:15 compute-0 sudo[140635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exfcszfkndduajfgachxqtpbnzbijtvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162415.2857997-57-225222254723985/AnsiballZ_file.py'
Jan 23 10:00:15 compute-0 sudo[140635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:15 compute-0 python3.9[140637]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:00:15 compute-0 sudo[140635]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:15 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:00:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:16.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:16.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:16 compute-0 python3.9[140789]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:00:16 compute-0 ceph-mon[74335]: pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:00:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:00:16.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:00:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100017 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:00:17 compute-0 sudo[140939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjbwrqsavxgdlxyslgzrasjibqmzlejf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162417.0220761-126-64535551221265/AnsiballZ_seboolean.py'
Jan 23 10:00:17 compute-0 sudo[140939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:17 compute-0 python3.9[140941]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 23 10:00:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:17 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:00:18 compute-0 sudo[140944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:00:18 compute-0 sudo[140944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:00:18 compute-0 sudo[140944]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:18.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:18.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:19 compute-0 sudo[140939]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:19 compute-0 ceph-mon[74335]: pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:00:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:19 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:00:19
Jan 23 10:00:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:00:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:00:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.nfs', 'backups', 'default.rgw.log', 'volumes', 'vms']
Jan 23 10:00:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:00:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:19] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:00:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:19] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:00:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:00:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:00:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:00:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:20.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:20.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:20 compute-0 sudo[141124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxyopsraprqndodbkryxilhvcffvloqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162420.2542737-156-20568179453421/AnsiballZ_setup.py'
Jan 23 10:00:20 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 23 10:00:20 compute-0 sudo[141124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:20 compute-0 python3.9[141126]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 10:00:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:00:21 compute-0 ceph-mon[74335]: pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Jan 23 10:00:21 compute-0 sudo[141124]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:21 compute-0 sudo[141208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djdqqekuhtyvthcjbfnomoujgalszzlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162420.2542737-156-20568179453421/AnsiballZ_dnf.py'
Jan 23 10:00:21 compute-0 sudo[141208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:21 compute-0 python3.9[141210]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 10:00:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:21 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Jan 23 10:00:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:22.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:22.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:22 compute-0 ceph-mon[74335]: pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Jan 23 10:00:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:23 compute-0 sudo[141208]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:23 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:00:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:24 compute-0 sudo[141366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpbxqzgaqbtlsmoezbxevyqhaualzowg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162423.669887-192-212810479682385/AnsiballZ_systemd.py'
Jan 23 10:00:24 compute-0 sudo[141366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 10:00:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:24.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 10:00:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:24.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:24 compute-0 python3.9[141368]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 10:00:24 compute-0 sudo[141366]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:25 compute-0 ceph-mon[74335]: pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:00:25 compute-0 sudo[141521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kokwtdfobylplmomyaqtwuomhentzfmu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769162424.9658477-216-187329388544806/AnsiballZ_edpm_nftables_snippet.py'
Jan 23 10:00:25 compute-0 sudo[141521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:25 compute-0 python3[141523]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 23 10:00:25 compute-0 sudo[141521]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:25 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:00:26 compute-0 sudo[141675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttoctexwwrzwqdauypgtodsumutkhlyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162425.9749146-243-83599558210987/AnsiballZ_file.py'
Jan 23 10:00:26 compute-0 sudo[141675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:26.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:26.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:26 compute-0 python3.9[141677]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:26 compute-0 ceph-mon[74335]: pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:00:26 compute-0 sudo[141675]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:00:27.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:00:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:00:27.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:00:27 compute-0 sudo[141827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-datdictzxuhovwhtvimtadkhixapzxvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162426.8442209-267-112366473608214/AnsiballZ_stat.py'
Jan 23 10:00:27 compute-0 sudo[141827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:27 compute-0 python3.9[141829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:27 compute-0 sudo[141827]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:27 compute-0 sudo[141906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umtewtzsyhhevzxmyczxlxltwkktcgzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162426.8442209-267-112366473608214/AnsiballZ_file.py'
Jan 23 10:00:27 compute-0 sudo[141906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:27 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:28 compute-0 python3.9[141908]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:00:28 compute-0 sudo[141906]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:28 compute-0 sudo[142061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neyyneteundkyicnhiljaaqlzkjejoak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162428.203961-303-148853957240292/AnsiballZ_stat.py'
Jan 23 10:00:28 compute-0 sudo[142061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:28.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:28.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:28 compute-0 python3.9[142063]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:28 compute-0 sudo[142061]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:28 compute-0 sudo[142139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwmbkhnjezwlgryvnqfsqosmundlfakf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162428.203961-303-148853957240292/AnsiballZ_file.py'
Jan 23 10:00:28 compute-0 sudo[142139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:29 compute-0 python3.9[142141]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.vp3uwum4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:29 compute-0 sudo[142139]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:29 compute-0 ceph-mon[74335]: pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:00:29 compute-0 sudo[142292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngnmotqvlglshnnbscbrhxsbuxpdoidq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162429.4374256-339-16887299027099/AnsiballZ_stat.py'
Jan 23 10:00:29 compute-0 sudo[142292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:29 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70003260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:29 compute-0 python3.9[142294]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:29] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:00:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:29] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:00:29 compute-0 sudo[142292]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:30 compute-0 sudo[142371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjlgturfqcnjjulppxdmfubehuxrxuwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162429.4374256-339-16887299027099/AnsiballZ_file.py'
Jan 23 10:00:30 compute-0 sudo[142371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:30.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:30.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:30 compute-0 python3.9[142373]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:30 compute-0 sudo[142371]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:31 compute-0 sudo[142523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvqihnsbpnjbkdtklgjxefevonoezxwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162430.8915184-378-153027991582387/AnsiballZ_command.py'
Jan 23 10:00:31 compute-0 sudo[142523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:31 compute-0 ceph-mon[74335]: pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:31 compute-0 python3.9[142525]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:00:31 compute-0 sudo[142523]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:31 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c003f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001660 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:32 compute-0 sudo[142678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkhtueycbjztcsbguxggvdwlodbflxzk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769162431.8063154-402-217004586130518/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 10:00:32 compute-0 sudo[142678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:32 compute-0 python3[142680]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 10:00:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:32 compute-0 sudo[142678]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:32.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:32.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:32 compute-0 ceph-mon[74335]: pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:33 compute-0 sudo[142830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxkntttwmrdckvcoqwqigzvygwawyguh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162432.7206423-426-90617141995373/AnsiballZ_stat.py'
Jan 23 10:00:33 compute-0 sudo[142830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:33 compute-0 python3.9[142832]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:33 compute-0 sudo[142830]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:33 compute-0 sudo[142956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqsibbmbmxzrxwehudsyvhzajwzbpzrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162432.7206423-426-90617141995373/AnsiballZ_copy.py'
Jan 23 10:00:33 compute-0 sudo[142956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:33 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001660 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:34 compute-0 python3.9[142958]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162432.7206423-426-90617141995373/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:34 compute-0 sudo[142956]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:34.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:34.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:34 compute-0 sudo[143109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgxdipjafkdjcqkqfynjyvhuytuiwedy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162434.2734218-471-65527680394913/AnsiballZ_stat.py'
Jan 23 10:00:34 compute-0 sudo[143109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:00:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:00:35 compute-0 python3.9[143111]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:35 compute-0 sudo[143109]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:35 compute-0 ceph-mon[74335]: pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:00:35 compute-0 sudo[143234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enjjgymhtgkdlbgaiyojnlyaovmkotwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162434.2734218-471-65527680394913/AnsiballZ_copy.py'
Jan 23 10:00:35 compute-0 sudo[143234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:35 compute-0 python3.9[143236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162434.2734218-471-65527680394913/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:35 compute-0 sudo[143234]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:35 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:36 compute-0 sudo[143388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrsrmdcxpzmwdvdeyhsvkwwzddfqbaok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162435.9211535-516-80652298904981/AnsiballZ_stat.py'
Jan 23 10:00:36 compute-0 sudo[143388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:36.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:36.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:36 compute-0 python3.9[143390]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:36 compute-0 sudo[143388]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:36 compute-0 sudo[143513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnfibsrgsvxdopxbzktjirfxeenlixnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162435.9211535-516-80652298904981/AnsiballZ_copy.py'
Jan 23 10:00:36 compute-0 sudo[143513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:00:37.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:00:37 compute-0 python3.9[143515]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162435.9211535-516-80652298904981/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:37 compute-0 sudo[143513]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:37 compute-0 ceph-mon[74335]: pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:37 compute-0 sudo[143665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcjwsmchdmvmknodtiunqvxemheeftra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162437.4153042-561-112584781449008/AnsiballZ_stat.py'
Jan 23 10:00:37 compute-0 sudo[143665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:37 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:37 compute-0 python3.9[143668]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:37 compute-0 sudo[143665]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:00:38 compute-0 sudo[143732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:00:38 compute-0 sudo[143732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:00:38 compute-0 sudo[143732]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:38 compute-0 sudo[143817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vosfqoaiitphrdtpklvtnqkjhfnivhvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162437.4153042-561-112584781449008/AnsiballZ_copy.py'
Jan 23 10:00:38 compute-0 sudo[143817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:38 compute-0 python3.9[143819]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162437.4153042-561-112584781449008/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:38 compute-0 sudo[143817]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:38.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:38.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:38 compute-0 ceph-mon[74335]: pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:00:39 compute-0 sudo[143969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtuqarlzyapvornhfzcdlqvnshihcwcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162438.6952746-606-243564134698225/AnsiballZ_stat.py'
Jan 23 10:00:39 compute-0 sudo[143969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:39 compute-0 python3.9[143971]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:39 compute-0 sudo[143969]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:39 compute-0 sudo[144095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqiyxxppjdniksvxmlzhekhfakpbqbtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162438.6952746-606-243564134698225/AnsiballZ_copy.py'
Jan 23 10:00:39 compute-0 sudo[144095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:39 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e800040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:39 compute-0 python3.9[144097]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162438.6952746-606-243564134698225/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:39 compute-0 sudo[144095]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:39] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 10:00:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:39] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 10:00:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84001810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:40 compute-0 sudo[144248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyjgmjspvbcgdbykfkzkjztvcoizlxkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162440.151031-651-46942487956014/AnsiballZ_file.py'
Jan 23 10:00:40 compute-0 sudo[144248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:40.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:40.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:40 compute-0 python3.9[144250]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:40 compute-0 sudo[144248]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:41 compute-0 sudo[144400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbsafmoyifuahinskymkfhpwuhogtglm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162440.8941898-675-255156679378676/AnsiballZ_command.py'
Jan 23 10:00:41 compute-0 sudo[144400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:41 compute-0 ceph-mon[74335]: pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:41 compute-0 python3.9[144402]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
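The command invoked above is the dry-run validation pass for the EDPM nftables fragments: they are concatenated in their include order and piped to nft, whose -c flag parses and checks the ruleset without applying anything. A minimal shell equivalent, using only the paths visible in this log entry:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -    # -c: check/parse only, live ruleset untouched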
Jan 23 10:00:41 compute-0 sudo[144400]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:41 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:42 compute-0 sudo[144557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjuadwszbeudwfiqfskcofgzexpwsazq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162441.6702414-699-163879440249145/AnsiballZ_blockinfile.py'
Jan 23 10:00:42 compute-0 sudo[144557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:42 compute-0 python3.9[144559]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
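Given the block, marker, marker_begin and marker_end parameters logged for the blockinfile call above, the managed block it maintains in /etc/sysconfig/nftables.conf would plausibly read as follows (a reconstruction from the logged parameters; the role validates the file with nft -c -f before saving it):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK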
Jan 23 10:00:42 compute-0 sudo[144557]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:42.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:42.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:42 compute-0 sudo[144709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snjwyktllmybeffaylcjrtripwquqoro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162442.585813-726-127381036325762/AnsiballZ_command.py'
Jan 23 10:00:42 compute-0 sudo[144709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:43 compute-0 python3.9[144711]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:00:43 compute-0 sudo[144709]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:43 compute-0 ceph-mon[74335]: pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:43 compute-0 sudo[144862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlilluxkknwtoybyelzitwyonwgvriqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162443.302853-750-61220854389256/AnsiballZ_stat.py'
Jan 23 10:00:43 compute-0 sudo[144862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:43 compute-0 python3.9[144864]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:00:43 compute-0 sudo[144862]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:43 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84002e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:44 compute-0 sudo[145018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrvslsvewwlzdmlujixtgkybdfmckbwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162444.222229-774-141665802296066/AnsiballZ_command.py'
Jan 23 10:00:44 compute-0 sudo[145018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:44.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:44.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:44 compute-0 python3.9[145020]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:00:44 compute-0 sudo[145018]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:45 compute-0 ceph-mon[74335]: pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:45 compute-0 sudo[145173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djnahxdovilsrqzcnfmnduwzuwgdqadj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162444.9404097-798-256717188135506/AnsiballZ_file.py'
Jan 23 10:00:45 compute-0 sudo[145173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:45 compute-0 python3.9[145175]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
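Taken together, the touch of edpm-rules.nft.changed (10:00:40), the stat on it (10:00:43), the nft reloads (10:00:43-10:00:44) and its removal here form a change-marker handshake: the live ruleset is only flushed and repopulated when the rendered rules actually changed. A rough shell sketch of that flow, assuming the marker file is the only trigger:

    nft -f /etc/nftables/edpm-chains.nft                      # tables/chains are (re)created unconditionally
    # the earlier copy step creates the marker only when the rendered rules changed
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -    # flush and reload the rules in one nft invocation
        rm -f /etc/nftables/edpm-rules.nft.changed            # consume the marker
    fi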
Jan 23 10:00:45 compute-0 sudo[145173]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:45 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70002620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84002e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:46.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:46.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:46 compute-0 python3.9[145327]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:00:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:00:47.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:00:47 compute-0 ceph-mon[74335]: pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:47 compute-0 sudo[145479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrwostgepilskxalohztotbxitvuthje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162447.4980621-918-226953035271674/AnsiballZ_command.py'
Jan 23 10:00:47 compute-0 sudo[145479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:47 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:47 compute-0 python3.9[145481]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:00:47 compute-0 ovs-vsctl[145482]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
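The ovs-vsctl call above seeds the OVN chassis configuration in the external_ids column of the Open_vSwitch table; the same keys can be read back to confirm the values (keys and expected values taken from the log entry):

    ovs-vsctl get open . external_ids:ovn-encap-ip      # "172.19.0.101"
    ovs-vsctl get open . external_ids:ovn-remote        # "ssl:ovsdbserver-sb.openstack.svc:6642"
    ovs-vsctl --columns=external_ids list Open_vSwitch  # full key/value dump of the row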
Jan 23 10:00:48 compute-0 sudo[145479]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:00:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70002620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:48.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:48.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:48 compute-0 sudo[145633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpgfgdwwsxszpqyptzebbvrtxcajmqeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162448.5374537-945-98711670196492/AnsiballZ_command.py'
Jan 23 10:00:48 compute-0 sudo[145633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:49 compute-0 python3.9[145635]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:00:49 compute-0 sudo[145633]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:49 compute-0 sudo[145788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aobfkvlxmknbacdiennqbjijbinjfugr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162449.229831-969-230684563155070/AnsiballZ_command.py'
Jan 23 10:00:49 compute-0 sudo[145788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:49 compute-0 python3.9[145790]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:00:49 compute-0 ovs-vsctl[145791]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
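The grep for "Manager" at 10:00:49 followed by the create call above amounts to an add-if-missing pattern for the local ptcp manager socket; a plausible one-line reconstruction of that logic, plus a way to confirm the result:

    ovs-vsctl show | grep -q "Manager" || \
        ovs-vsctl --timeout=5 --id=@manager -- create Manager 'target="ptcp:6640:127.0.0.1"' \
                  -- add Open_vSwitch . manager_options @manager
    ovs-vsctl get-manager    # should now print ptcp:6640:127.0.0.1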
Jan 23 10:00:49 compute-0 sudo[145788]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:49 compute-0 sudo[145817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:00:49 compute-0 sudo[145817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:00:49 compute-0 sudo[145817]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:49 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84002e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:49 compute-0 sudo[145842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:00:49 compute-0 sudo[145842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:00:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:49] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 10:00:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:49] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 10:00:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:00:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:00:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:00:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:00:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:00:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:00:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:00:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:00:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 10:00:50 compute-0 sudo[145842]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70002620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:50.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:50.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:51 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84002e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70002620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:00:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:52.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:00:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:52.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:53 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84002e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84002e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:54.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:00:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:54.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:00:55 compute-0 ceph-mon[74335]: pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:00:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:00:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 10:00:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:00:55 compute-0 python3.9[146029]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:00:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:55 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70002620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:00:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:00:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:00:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:00:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:00:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:00:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:00:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:00:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:00:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:00:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:00:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:00:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:00:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:00:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:00:56 compute-0 ceph-mon[74335]: pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:56 compute-0 ceph-mon[74335]: pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:56 compute-0 ceph-mon[74335]: pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:00:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:00:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:00:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:00:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:00:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:00:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:00:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:00:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:00:56 compute-0 sudo[146132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:00:56 compute-0 sudo[146132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:00:56 compute-0 sudo[146132]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:56 compute-0 sudo[146181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:00:56 compute-0 sudo[146181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
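The cephadm call above hands /dev/ceph_vg0/ceph_lv0 to ceph-volume lvm batch to prepare an OSD on this host. Once it finishes, one way to inspect the result is via a cephadm shell (fsid taken from the log; these commands assume the admin keyring is available on this host):

    cephadm shell --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- ceph-volume lvm list   # OSDs built from local LVs
    ceph osd tree                                                                        # confirm the new OSD joined the CRUSH map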
Jan 23 10:00:56 compute-0 sudo[146230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohvevhgizfcidobmfftbvccfmpyyimkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162456.2374818-1020-280106052705967/AnsiballZ_file.py'
Jan 23 10:00:56 compute-0 sudo[146230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:56.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:56.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:56 compute-0 python3.9[146234]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:00:56 compute-0 sudo[146230]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:56 compute-0 podman[146275]: 2026-01-23 10:00:56.983560133 +0000 UTC m=+0.091253035 container create 916e38b8ca6e3decad0118c9d770a6e342d369dcfcab572d09b7535872451525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_keldysh, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 23 10:00:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:00:57.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:00:57 compute-0 podman[146275]: 2026-01-23 10:00:56.918102128 +0000 UTC m=+0.025795050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:00:57 compute-0 ceph-mgr[74633]: [dashboard INFO request] [192.168.122.100:48312] [POST] [200] [0.004s] [4.0B] [ce011c72-2f95-402f-aa57-9c27de8a1d5f] /api/prometheus_receiver
Jan 23 10:00:57 compute-0 systemd[1]: Started libpod-conmon-916e38b8ca6e3decad0118c9d770a6e342d369dcfcab572d09b7535872451525.scope.
Jan 23 10:00:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:00:57 compute-0 podman[146275]: 2026-01-23 10:00:57.17824582 +0000 UTC m=+0.285938752 container init 916e38b8ca6e3decad0118c9d770a6e342d369dcfcab572d09b7535872451525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 10:00:57 compute-0 podman[146275]: 2026-01-23 10:00:57.191207221 +0000 UTC m=+0.298900123 container start 916e38b8ca6e3decad0118c9d770a6e342d369dcfcab572d09b7535872451525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_keldysh, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:00:57 compute-0 podman[146275]: 2026-01-23 10:00:57.196480672 +0000 UTC m=+0.304173604 container attach 916e38b8ca6e3decad0118c9d770a6e342d369dcfcab572d09b7535872451525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_keldysh, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:00:57 compute-0 peaceful_keldysh[146338]: 167 167
Jan 23 10:00:57 compute-0 systemd[1]: libpod-916e38b8ca6e3decad0118c9d770a6e342d369dcfcab572d09b7535872451525.scope: Deactivated successfully.
Jan 23 10:00:57 compute-0 podman[146275]: 2026-01-23 10:00:57.200315722 +0000 UTC m=+0.308008624 container died 916e38b8ca6e3decad0118c9d770a6e342d369dcfcab572d09b7535872451525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 23 10:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8becf75404958f29d98e2c519b5dfe098f1ec580f8490c0691909890ff6fe02d-merged.mount: Deactivated successfully.
Jan 23 10:00:57 compute-0 podman[146275]: 2026-01-23 10:00:57.253228898 +0000 UTC m=+0.360921800 container remove 916e38b8ca6e3decad0118c9d770a6e342d369dcfcab572d09b7535872451525 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_keldysh, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:00:57 compute-0 systemd[1]: libpod-conmon-916e38b8ca6e3decad0118c9d770a6e342d369dcfcab572d09b7535872451525.scope: Deactivated successfully.
Jan 23 10:00:57 compute-0 sudo[146460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zksewzuxcgztzsschaoevewdufwnwyxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162457.0414305-1044-213064202063614/AnsiballZ_stat.py'
Jan 23 10:00:57 compute-0 sudo[146460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:57 compute-0 podman[146468]: 2026-01-23 10:00:57.404897143 +0000 UTC m=+0.025276895 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:00:57 compute-0 podman[146468]: 2026-01-23 10:00:57.511257949 +0000 UTC m=+0.131637681 container create a89e2170c5148fcb36b35903cdb7be835190d04330e6e44dc279347cdcaa3640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:00:57 compute-0 ceph-mon[74335]: pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:00:57 compute-0 python3.9[146462]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:57 compute-0 sudo[146460]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:57 compute-0 systemd[1]: Started libpod-conmon-a89e2170c5148fcb36b35903cdb7be835190d04330e6e44dc279347cdcaa3640.scope.
Jan 23 10:00:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83289088a33472fe07b2f4cc5a74e248fdcf62b50b27ee2353238388c4940f2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83289088a33472fe07b2f4cc5a74e248fdcf62b50b27ee2353238388c4940f2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83289088a33472fe07b2f4cc5a74e248fdcf62b50b27ee2353238388c4940f2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83289088a33472fe07b2f4cc5a74e248fdcf62b50b27ee2353238388c4940f2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83289088a33472fe07b2f4cc5a74e248fdcf62b50b27ee2353238388c4940f2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:00:57 compute-0 podman[146468]: 2026-01-23 10:00:57.688917349 +0000 UTC m=+0.309297111 container init a89e2170c5148fcb36b35903cdb7be835190d04330e6e44dc279347cdcaa3640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 23 10:00:57 compute-0 podman[146468]: 2026-01-23 10:00:57.697710611 +0000 UTC m=+0.318090343 container start a89e2170c5148fcb36b35903cdb7be835190d04330e6e44dc279347cdcaa3640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:00:57 compute-0 podman[146468]: 2026-01-23 10:00:57.703416814 +0000 UTC m=+0.323796576 container attach a89e2170c5148fcb36b35903cdb7be835190d04330e6e44dc279347cdcaa3640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_rosalind, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:00:57 compute-0 sudo[146566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dleaxbkxmwwpppkzuecfuwipyqfzntha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162457.0414305-1044-213064202063614/AnsiballZ_file.py'
Jan 23 10:00:57 compute-0 sudo[146566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:57 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84002e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:58 compute-0 python3.9[146568]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:00:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:00:58 compute-0 compassionate_rosalind[146487]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:00:58 compute-0 compassionate_rosalind[146487]: --> All data devices are unavailable
Jan 23 10:00:58 compute-0 sudo[146566]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:58 compute-0 systemd[1]: libpod-a89e2170c5148fcb36b35903cdb7be835190d04330e6e44dc279347cdcaa3640.scope: Deactivated successfully.
Jan 23 10:00:58 compute-0 podman[146468]: 2026-01-23 10:00:58.112666057 +0000 UTC m=+0.733045799 container died a89e2170c5148fcb36b35903cdb7be835190d04330e6e44dc279347cdcaa3640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_rosalind, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:00:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-83289088a33472fe07b2f4cc5a74e248fdcf62b50b27ee2353238388c4940f2e-merged.mount: Deactivated successfully.
Jan 23 10:00:58 compute-0 podman[146468]: 2026-01-23 10:00:58.214251637 +0000 UTC m=+0.834631389 container remove a89e2170c5148fcb36b35903cdb7be835190d04330e6e44dc279347cdcaa3640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_rosalind, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:00:58 compute-0 systemd[1]: libpod-conmon-a89e2170c5148fcb36b35903cdb7be835190d04330e6e44dc279347cdcaa3640.scope: Deactivated successfully.
Jan 23 10:00:58 compute-0 sudo[146636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:00:58 compute-0 sudo[146636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:00:58 compute-0 sudo[146181]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:58 compute-0 sudo[146636]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70002620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:58 compute-0 sudo[146693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:00:58 compute-0 sudo[146693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:00:58 compute-0 sudo[146693]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:58 compute-0 sudo[146741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:00:58 compute-0 sudo[146741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:00:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:00:58 compute-0 sudo[146816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjubhcupgzxyrqcsghhgtagxmhoibzna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162458.2313206-1044-1185343142459/AnsiballZ_stat.py'
Jan 23 10:00:58 compute-0 sudo[146816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:58 compute-0 ceph-mon[74335]: pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:00:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:58.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:00:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:00:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:58.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:00:58 compute-0 python3.9[146818]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:00:58 compute-0 sudo[146816]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:58 compute-0 podman[146860]: 2026-01-23 10:00:58.840740413 +0000 UTC m=+0.046779601 container create abeb88849eb282607168ac6536171a8d3dcea3b46fe8222c021582bab57e55bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_feynman, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 10:00:58 compute-0 systemd[1]: Started libpod-conmon-abeb88849eb282607168ac6536171a8d3dcea3b46fe8222c021582bab57e55bb.scope.
Jan 23 10:00:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:00:58 compute-0 podman[146860]: 2026-01-23 10:00:58.821130281 +0000 UTC m=+0.027169499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:00:58 compute-0 podman[146860]: 2026-01-23 10:00:58.928618201 +0000 UTC m=+0.134657409 container init abeb88849eb282607168ac6536171a8d3dcea3b46fe8222c021582bab57e55bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:00:58 compute-0 podman[146860]: 2026-01-23 10:00:58.936161397 +0000 UTC m=+0.142200585 container start abeb88849eb282607168ac6536171a8d3dcea3b46fe8222c021582bab57e55bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_feynman, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:00:58 compute-0 podman[146860]: 2026-01-23 10:00:58.940076609 +0000 UTC m=+0.146115817 container attach abeb88849eb282607168ac6536171a8d3dcea3b46fe8222c021582bab57e55bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_feynman, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 23 10:00:58 compute-0 wonderful_feynman[146900]: 167 167
Jan 23 10:00:58 compute-0 systemd[1]: libpod-abeb88849eb282607168ac6536171a8d3dcea3b46fe8222c021582bab57e55bb.scope: Deactivated successfully.
Jan 23 10:00:58 compute-0 podman[146860]: 2026-01-23 10:00:58.942491698 +0000 UTC m=+0.148530906 container died abeb88849eb282607168ac6536171a8d3dcea3b46fe8222c021582bab57e55bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 10:00:59 compute-0 sudo[146968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsjavgsavoloiychtcdpnjllruuecltm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162458.2313206-1044-1185343142459/AnsiballZ_file.py'
Jan 23 10:00:59 compute-0 sudo[146968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a4e6f7dd32e6e135c6a89fbdfcc17457ed0e470e0a8568fbe087d409c2ab0f7-merged.mount: Deactivated successfully.
Jan 23 10:00:59 compute-0 python3.9[146970]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:00:59 compute-0 sudo[146968]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:59 compute-0 sudo[147120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fczenoudefzjsnaoddmbijrigqoifdnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162459.3906465-1113-271233941699243/AnsiballZ_file.py'
Jan 23 10:00:59 compute-0 sudo[147120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:00:59 compute-0 python3.9[147122]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:00:59 compute-0 sudo[147120]: pam_unix(sudo:session): session closed for user root
Jan 23 10:00:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:00:59 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0012f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:00:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:59] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Jan 23 10:00:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:00:59] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Jan 23 10:01:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:00 compute-0 podman[146860]: 2026-01-23 10:01:00.146662783 +0000 UTC m=+1.352701971 container remove abeb88849eb282607168ac6536171a8d3dcea3b46fe8222c021582bab57e55bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 10:01:00 compute-0 systemd[1]: libpod-conmon-abeb88849eb282607168ac6536171a8d3dcea3b46fe8222c021582bab57e55bb.scope: Deactivated successfully.
Jan 23 10:01:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:00 compute-0 podman[147233]: 2026-01-23 10:01:00.317074625 +0000 UTC m=+0.044969120 container create b676c6192bcc896bb142028e424ed61c706aabfcbeb93939d56689b30177e534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sinoussi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 23 10:01:00 compute-0 systemd[1]: Started libpod-conmon-b676c6192bcc896bb142028e424ed61c706aabfcbeb93939d56689b30177e534.scope.
Jan 23 10:01:00 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:01:00 compute-0 podman[147233]: 2026-01-23 10:01:00.298513273 +0000 UTC m=+0.026407788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7c8291d92cccbf0fc6ed7badd66d3f8d34cdab6e42926b69a47dafd4c794ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7c8291d92cccbf0fc6ed7badd66d3f8d34cdab6e42926b69a47dafd4c794ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7c8291d92cccbf0fc6ed7badd66d3f8d34cdab6e42926b69a47dafd4c794ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7c8291d92cccbf0fc6ed7badd66d3f8d34cdab6e42926b69a47dafd4c794ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:01:00 compute-0 podman[147233]: 2026-01-23 10:01:00.411928732 +0000 UTC m=+0.139823237 container init b676c6192bcc896bb142028e424ed61c706aabfcbeb93939d56689b30177e534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 10:01:00 compute-0 podman[147233]: 2026-01-23 10:01:00.421197897 +0000 UTC m=+0.149092392 container start b676c6192bcc896bb142028e424ed61c706aabfcbeb93939d56689b30177e534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sinoussi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 10:01:00 compute-0 podman[147233]: 2026-01-23 10:01:00.429209497 +0000 UTC m=+0.157103992 container attach b676c6192bcc896bb142028e424ed61c706aabfcbeb93939d56689b30177e534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 10:01:00 compute-0 sudo[147303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsfcugikhqpfolzkogqdvrusceiqwncl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162460.0721912-1137-115997028941078/AnsiballZ_stat.py'
Jan 23 10:01:00 compute-0 sudo[147303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:01:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:00.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:01:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:00 compute-0 python3.9[147306]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:01:00 compute-0 sudo[147303]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]: {
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:     "1": [
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:         {
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             "devices": [
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "/dev/loop3"
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             ],
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             "lv_name": "ceph_lv0",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             "lv_size": "21470642176",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             "name": "ceph_lv0",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             "tags": {
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.cluster_name": "ceph",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.crush_device_class": "",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.encrypted": "0",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.osd_id": "1",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.type": "block",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.vdo": "0",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:                 "ceph.with_tpm": "0"
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             },
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             "type": "block",
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:             "vg_name": "ceph_vg0"
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:         }
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]:     ]
Jan 23 10:01:00 compute-0 trusting_sinoussi[147273]: }
Jan 23 10:01:00 compute-0 systemd[1]: libpod-b676c6192bcc896bb142028e424ed61c706aabfcbeb93939d56689b30177e534.scope: Deactivated successfully.
Jan 23 10:01:00 compute-0 podman[147233]: 2026-01-23 10:01:00.790689812 +0000 UTC m=+0.518584327 container died b676c6192bcc896bb142028e424ed61c706aabfcbeb93939d56689b30177e534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Jan 23 10:01:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c7c8291d92cccbf0fc6ed7badd66d3f8d34cdab6e42926b69a47dafd4c794ff-merged.mount: Deactivated successfully.
Jan 23 10:01:00 compute-0 podman[147233]: 2026-01-23 10:01:00.852337048 +0000 UTC m=+0.580231543 container remove b676c6192bcc896bb142028e424ed61c706aabfcbeb93939d56689b30177e534 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_sinoussi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 23 10:01:00 compute-0 systemd[1]: libpod-conmon-b676c6192bcc896bb142028e424ed61c706aabfcbeb93939d56689b30177e534.scope: Deactivated successfully.
Jan 23 10:01:00 compute-0 sudo[146741]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:00 compute-0 sudo[147374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:01:00 compute-0 sudo[147420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihzfqiggeldgrjlynxbigteizpwxbwjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162460.0721912-1137-115997028941078/AnsiballZ_file.py'
Jan 23 10:01:00 compute-0 sudo[147374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:01:00 compute-0 sudo[147420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:00 compute-0 sudo[147374]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:01 compute-0 sudo[147425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:01:01 compute-0 sudo[147425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:01:01 compute-0 ceph-mon[74335]: pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:01 compute-0 python3.9[147424]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:01:01 compute-0 sudo[147420]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:01 compute-0 CROND[147492]: (root) CMD (run-parts /etc/cron.hourly)
Jan 23 10:01:01 compute-0 run-parts[147503]: (/etc/cron.hourly) starting 0anacron
Jan 23 10:01:01 compute-0 anacron[147517]: Anacron started on 2026-01-23
Jan 23 10:01:01 compute-0 anacron[147517]: Will run job `cron.daily' in 45 min.
Jan 23 10:01:01 compute-0 anacron[147517]: Will run job `cron.weekly' in 65 min.
Jan 23 10:01:01 compute-0 anacron[147517]: Will run job `cron.monthly' in 85 min.
Jan 23 10:01:01 compute-0 anacron[147517]: Jobs will be executed sequentially
Jan 23 10:01:01 compute-0 run-parts[147521]: (/etc/cron.hourly) finished 0anacron
Jan 23 10:01:01 compute-0 CROND[147486]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 23 10:01:01 compute-0 podman[147526]: 2026-01-23 10:01:01.521443284 +0000 UTC m=+0.057665942 container create 0224f48fd154f6532aadde0078acad95c244a709d29da175e581e1e58edb2ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:01:01 compute-0 systemd[1]: Started libpod-conmon-0224f48fd154f6532aadde0078acad95c244a709d29da175e581e1e58edb2ed7.scope.
Jan 23 10:01:01 compute-0 podman[147526]: 2026-01-23 10:01:01.49373574 +0000 UTC m=+0.029958418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:01:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:01:01 compute-0 podman[147526]: 2026-01-23 10:01:01.625995229 +0000 UTC m=+0.162217917 container init 0224f48fd154f6532aadde0078acad95c244a709d29da175e581e1e58edb2ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 10:01:01 compute-0 podman[147526]: 2026-01-23 10:01:01.637978782 +0000 UTC m=+0.174201440 container start 0224f48fd154f6532aadde0078acad95c244a709d29da175e581e1e58edb2ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:01:01 compute-0 podman[147526]: 2026-01-23 10:01:01.643260704 +0000 UTC m=+0.179483362 container attach 0224f48fd154f6532aadde0078acad95c244a709d29da175e581e1e58edb2ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:01:01 compute-0 keen_bartik[147579]: 167 167
Jan 23 10:01:01 compute-0 systemd[1]: libpod-0224f48fd154f6532aadde0078acad95c244a709d29da175e581e1e58edb2ed7.scope: Deactivated successfully.
Jan 23 10:01:01 compute-0 podman[147526]: 2026-01-23 10:01:01.647377882 +0000 UTC m=+0.183600570 container died 0224f48fd154f6532aadde0078acad95c244a709d29da175e581e1e58edb2ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:01:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ee400e24a0d840de216b007ce926c36d07e094ab55a8f4d9d84185bfd5bee3-merged.mount: Deactivated successfully.
Jan 23 10:01:01 compute-0 podman[147526]: 2026-01-23 10:01:01.69967364 +0000 UTC m=+0.235896298 container remove 0224f48fd154f6532aadde0078acad95c244a709d29da175e581e1e58edb2ed7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 23 10:01:01 compute-0 systemd[1]: libpod-conmon-0224f48fd154f6532aadde0078acad95c244a709d29da175e581e1e58edb2ed7.scope: Deactivated successfully.
Jan 23 10:01:01 compute-0 sudo[147704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfjixajdylpbysthpnojydkcpsawafnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162461.5412967-1173-229923606356845/AnsiballZ_stat.py'
Jan 23 10:01:01 compute-0 sudo[147704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:01 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:01 compute-0 podman[147673]: 2026-01-23 10:01:01.860465846 +0000 UTC m=+0.030708091 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:01:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:02 compute-0 python3.9[147709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:01:02 compute-0 sudo[147704]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c3d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:02 compute-0 sudo[147786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwymaxdbbagvnwsktysfnhyktsqrhpdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162461.5412967-1173-229923606356845/AnsiballZ_file.py'
Jan 23 10:01:02 compute-0 sudo[147786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:02 compute-0 podman[147673]: 2026-01-23 10:01:02.423957668 +0000 UTC m=+0.594199883 container create 8e3d644cc9f2be7c81f872bc7cad6cf3a1daff146a7729f731f80bfd0d331d44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 10:01:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:02 compute-0 systemd[1]: Started libpod-conmon-8e3d644cc9f2be7c81f872bc7cad6cf3a1daff146a7729f731f80bfd0d331d44.scope.
Jan 23 10:01:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:01:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:02.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:01:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db72a535adad00154b56a1d382995708f4eb6d69c78247a38b16c99a56cf1a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:01:02 compute-0 python3.9[147788]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db72a535adad00154b56a1d382995708f4eb6d69c78247a38b16c99a56cf1a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db72a535adad00154b56a1d382995708f4eb6d69c78247a38b16c99a56cf1a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db72a535adad00154b56a1d382995708f4eb6d69c78247a38b16c99a56cf1a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:01:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:02.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:02 compute-0 sudo[147786]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:02 compute-0 podman[147673]: 2026-01-23 10:01:02.645953477 +0000 UTC m=+0.816195712 container init 8e3d644cc9f2be7c81f872bc7cad6cf3a1daff146a7729f731f80bfd0d331d44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Jan 23 10:01:02 compute-0 podman[147673]: 2026-01-23 10:01:02.654333207 +0000 UTC m=+0.824575422 container start 8e3d644cc9f2be7c81f872bc7cad6cf3a1daff146a7729f731f80bfd0d331d44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:01:02 compute-0 podman[147673]: 2026-01-23 10:01:02.661097351 +0000 UTC m=+0.831339566 container attach 8e3d644cc9f2be7c81f872bc7cad6cf3a1daff146a7729f731f80bfd0d331d44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 10:01:02 compute-0 ceph-mon[74335]: pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:03 compute-0 sudo[147972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnoxgnusvqdxdoftsabndobqhgthkdyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162462.7872705-1209-40907728228323/AnsiballZ_systemd.py'
Jan 23 10:01:03 compute-0 sudo[147972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:03 compute-0 lvm[148017]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:01:03 compute-0 lvm[148017]: VG ceph_vg0 finished
Jan 23 10:01:03 compute-0 python3.9[147979]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:01:03 compute-0 systemd[1]: Reloading.
Jan 23 10:01:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:01:03 compute-0 compassionate_raman[147791]: {}
Jan 23 10:01:03 compute-0 systemd-sysv-generator[148050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:01:03 compute-0 systemd-rc-local-generator[148047]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:01:03 compute-0 podman[148056]: 2026-01-23 10:01:03.575194416 +0000 UTC m=+0.034652973 container died 8e3d644cc9f2be7c81f872bc7cad6cf3a1daff146a7729f731f80bfd0d331d44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:01:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:03 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78001400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:03 compute-0 systemd[1]: libpod-8e3d644cc9f2be7c81f872bc7cad6cf3a1daff146a7729f731f80bfd0d331d44.scope: Deactivated successfully.
Jan 23 10:01:03 compute-0 systemd[1]: libpod-8e3d644cc9f2be7c81f872bc7cad6cf3a1daff146a7729f731f80bfd0d331d44.scope: Consumed 1.367s CPU time.
Jan 23 10:01:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5db72a535adad00154b56a1d382995708f4eb6d69c78247a38b16c99a56cf1a0-merged.mount: Deactivated successfully.
Jan 23 10:01:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:04 compute-0 sudo[147972]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:04 compute-0 podman[148056]: 2026-01-23 10:01:04.231188358 +0000 UTC m=+0.690646895 container remove 8e3d644cc9f2be7c81f872bc7cad6cf3a1daff146a7729f731f80bfd0d331d44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 10:01:04 compute-0 systemd[1]: libpod-conmon-8e3d644cc9f2be7c81f872bc7cad6cf3a1daff146a7729f731f80bfd0d331d44.scope: Deactivated successfully.
Jan 23 10:01:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:04 compute-0 sudo[147425]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:01:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:01:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:01:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c3d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:01:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:01:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:04.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:01:04 compute-0 sudo[148153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:01:04 compute-0 sudo[148153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:01:04 compute-0 sudo[148153]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:04.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:04 compute-0 sudo[148248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzwphbeelgxzvmhoknlsvhroirehwvxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162464.4530306-1233-208249095621064/AnsiballZ_stat.py'
Jan 23 10:01:04 compute-0 sudo[148248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:01:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:01:05 compute-0 python3.9[148250]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:01:05 compute-0 sudo[148248]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:05 compute-0 sudo[148326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cccrgvdbtmamezxeewjupdtyymbhvhxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162464.4530306-1233-208249095621064/AnsiballZ_file.py'
Jan 23 10:01:05 compute-0 sudo[148326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:05 compute-0 ceph-mon[74335]: pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:01:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:01:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:01:05 compute-0 python3.9[148328]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:01:05 compute-0 sudo[148326]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:05 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:06 compute-0 sudo[148480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exgzbyefkcqzxzrlqsatmedcjaldexdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162466.15113-1269-245592428930205/AnsiballZ_stat.py'
Jan 23 10:01:06 compute-0 sudo[148480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:06.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:06.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:06 compute-0 ceph-mon[74335]: pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:06 compute-0 python3.9[148482]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:01:06 compute-0 sudo[148480]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:06 compute-0 sudo[148558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utbjmrxncidbkscguivsoirlxnhgskzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162466.15113-1269-245592428930205/AnsiballZ_file.py'
Jan 23 10:01:06 compute-0 sudo[148558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:07 compute-0 python3.9[148560]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:01:07 compute-0 sudo[148558]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:07 compute-0 sudo[148710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiwznwxsckcenruvvhwfljdwdurwlnks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162467.3207996-1305-191383347339318/AnsiballZ_systemd.py'
Jan 23 10:01:07 compute-0 sudo[148710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:07 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c3d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:07 compute-0 python3.9[148712]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:01:07 compute-0 systemd[1]: Reloading.
Jan 23 10:01:08 compute-0 systemd-rc-local-generator[148742]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:01:08 compute-0 systemd-sysv-generator[148747]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:01:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:01:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c3d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:08 compute-0 systemd[1]: Starting Create netns directory...
Jan 23 10:01:08 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 10:01:08 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 10:01:08 compute-0 systemd[1]: Finished Create netns directory.
Jan 23 10:01:08 compute-0 sudo[148710]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:01:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:08.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:01:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:08.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:01:09 compute-0 sudo[148906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frgsmoapvmrbvedenrdasqlgspohvwoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162468.806592-1335-148544462229089/AnsiballZ_file.py'
Jan 23 10:01:09 compute-0 sudo[148906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:09 compute-0 python3.9[148908]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:01:09 compute-0 sudo[148906]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:09 compute-0 ceph-mon[74335]: pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:01:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:09 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:09] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Jan 23 10:01:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:09] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Jan 23 10:01:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:10 compute-0 sudo[149060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoxdwltzksouyuymsnojhbngetvzmirl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162469.8771136-1359-254157627860487/AnsiballZ_stat.py'
Jan 23 10:01:10 compute-0 sudo[149060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:10 compute-0 python3.9[149062]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:01:10 compute-0 sudo[149060]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:01:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:10.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:01:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:10.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:10 compute-0 ceph-mon[74335]: pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:10 compute-0 sudo[149183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmpgxziojgxzezvldtzolabjbkmfrlpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162469.8771136-1359-254157627860487/AnsiballZ_copy.py'
Jan 23 10:01:10 compute-0 sudo[149183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:11 compute-0 python3.9[149185]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769162469.8771136-1359-254157627860487/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:01:11 compute-0 sudo[149183]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:11 compute-0 sudo[149336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixtfzuhdjicerxiigczgrxnmitkxnqul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162471.604137-1410-29467909175600/AnsiballZ_file.py'
Jan 23 10:01:11 compute-0 sudo[149336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:11 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:12 compute-0 python3.9[149338]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:01:12 compute-0 sudo[149336]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78003240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:12 compute-0 sudo[149489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjacuedkwnitnlqfgdjcyhzsriwkveao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162472.3322158-1434-35032969395787/AnsiballZ_file.py'
Jan 23 10:01:12 compute-0 sudo[149489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:12.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:12.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:12 compute-0 python3.9[149491]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:01:12 compute-0 sudo[149489]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:13 compute-0 sudo[149641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekokimdjjzlxplssauwwjoypylfbeddf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162473.1408033-1458-20497680599328/AnsiballZ_stat.py'
Jan 23 10:01:13 compute-0 sudo[149641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:01:13 compute-0 ceph-mon[74335]: pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:13 compute-0 python3.9[149643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:01:13 compute-0 sudo[149641]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:13 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:13 compute-0 sudo[149765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlolwzqmeimxefzeknvsuodijidusnfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162473.1408033-1458-20497680599328/AnsiballZ_copy.py'
Jan 23 10:01:13 compute-0 sudo[149765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:14 compute-0 python3.9[149767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162473.1408033-1458-20497680599328/.source.json _original_basename=.o87zparp follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:01:14 compute-0 sudo[149765]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:01:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:14.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:01:14 compute-0 ceph-mon[74335]: pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:14.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:14 compute-0 python3.9[149918]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:01:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:15 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:01:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:16.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:01:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:16.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:17 compute-0 ceph-mon[74335]: pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:17 compute-0 sudo[150341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lekpvcmajaournytjqmrmqgpcvxxhsad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162476.8599565-1578-4125139808638/AnsiballZ_container_config_data.py'
Jan 23 10:01:17 compute-0 sudo[150341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:17 compute-0 python3.9[150343]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 23 10:01:17 compute-0 sudo[150341]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:17 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:01:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:18 compute-0 sudo[150445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:01:18 compute-0 sudo[150445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:01:18 compute-0 sudo[150445]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:18 compute-0 ceph-osd[82641]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000087s
Jan 23 10:01:18 compute-0 sudo[150520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrlorevvsqqxdygdckznrhluigpzvnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162477.9677181-1611-131831324393878/AnsiballZ_container_config_hash.py'
Jan 23 10:01:18 compute-0 sudo[150520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:01:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:01:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:18.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:01:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:18.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:18 compute-0 python3.9[150522]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 10:01:18 compute-0 sudo[150520]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:19 compute-0 sudo[150673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvsmkgvqolaefkwwvmbgvdcygprgpkmj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769162479.2468183-1641-180298263266972/AnsiballZ_edpm_container_manage.py'
Jan 23 10:01:19 compute-0 sudo[150673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:01:19
Jan 23 10:01:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:01:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:01:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.nfs', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'images', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 23 10:01:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:01:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:19 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:19 compute-0 ceph-mon[74335]: pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:01:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:19] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Jan 23 10:01:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:19] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Jan 23 10:01:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:01:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:01:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:01:20 compute-0 python3[150675]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 10:01:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:20.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:20.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:01:21 compute-0 ceph-mon[74335]: pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:21 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:22.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:22.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:22 compute-0 ceph-mon[74335]: pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:01:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:23 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 10:01:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:24.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 10:01:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:24.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:24 compute-0 ceph-mon[74335]: pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100125 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:01:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:25 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:26.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:26.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:27 compute-0 ceph-mon[74335]: pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:01:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:27 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:01:28 compute-0 podman[150689]: 2026-01-23 10:01:28.265545726 +0000 UTC m=+7.990582750 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Jan 23 10:01:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:01:28 compute-0 podman[150815]: 2026-01-23 10:01:28.385712019 +0000 UTC m=+0.022693821 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Jan 23 10:01:28 compute-0 podman[150815]: 2026-01-23 10:01:28.494303839 +0000 UTC m=+0.131285611 container create ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 10:01:28 compute-0 python3[150675]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Jan 23 10:01:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:28.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:28.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:28 compute-0 sudo[150673]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:29 compute-0 sudo[150999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyzvnsglmhlepgdjpcwothnzkquppojq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162489.0850928-1665-104455436458902/AnsiballZ_stat.py'
Jan 23 10:01:29 compute-0 sudo[150999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:29 compute-0 python3.9[151001]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:01:29 compute-0 sudo[150999]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:29 compute-0 ceph-mon[74335]: pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:01:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:29 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:29] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Jan 23 10:01:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:29] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Jan 23 10:01:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:01:30 compute-0 sudo[151155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zazqwjpmvmuvxubyrvqmxjdriqskaiib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162489.997031-1692-181729412963971/AnsiballZ_file.py'
Jan 23 10:01:30 compute-0 sudo[151155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4001360 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:30 compute-0 python3.9[151157]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:01:30 compute-0 sudo[151155]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e840046f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:01:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:30.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:01:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:01:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:30.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:01:30 compute-0 sudo[151231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lndmciefariysbtbuzxqbywojlbublby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162489.997031-1692-181729412963971/AnsiballZ_stat.py'
Jan 23 10:01:30 compute-0 sudo[151231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:30 compute-0 python3.9[151233]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:01:30 compute-0 ceph-mon[74335]: pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:01:30 compute-0 sudo[151231]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:31 compute-0 sudo[151382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjmovqpmuaorcolzfyhaovkdgxfqyyjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162491.0355442-1692-280397106474919/AnsiballZ_copy.py'
Jan 23 10:01:31 compute-0 sudo[151382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:31 compute-0 python3.9[151384]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769162491.0355442-1692-280397106474919/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:01:31 compute-0 sudo[151382]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:31 compute-0 sudo[151459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqyhcypjzglgrecsptuylmizaoyojaxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162491.0355442-1692-280397106474919/AnsiballZ_systemd.py'
Jan 23 10:01:31 compute-0 sudo[151459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:31 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:01:32 compute-0 python3.9[151461]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 10:01:32 compute-0 systemd[1]: Reloading.
Jan 23 10:01:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:32 compute-0 systemd-rc-local-generator[151491]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:01:32 compute-0 systemd-sysv-generator[151497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:01:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:32 compute-0 sudo[151459]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:01:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:01:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:32.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:32 compute-0 sudo[151574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rivpvpueziypqsbaenmpidrrejuduako ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162491.0355442-1692-280397106474919/AnsiballZ_systemd.py'
Jan 23 10:01:32 compute-0 sudo[151574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:33 compute-0 python3.9[151576]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:01:33 compute-0 systemd[1]: Reloading.
Jan 23 10:01:33 compute-0 systemd-sysv-generator[151610]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:01:33 compute-0 systemd-rc-local-generator[151606]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:01:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:01:33 compute-0 systemd[1]: Starting ovn_controller container...
Jan 23 10:01:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:33 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400adf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:01:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42ef0523dc11947d0337c8489f20321b791f8fc891e68acd6e98eafaa7caa0d4/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 23 10:01:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:01:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:34.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:01:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:34.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:01:34 compute-0 ceph-mon[74335]: pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:01:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d.
Jan 23 10:01:35 compute-0 podman[151618]: 2026-01-23 10:01:35.144233223 +0000 UTC m=+1.395549649 container init ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 23 10:01:35 compute-0 ovn_controller[151634]: + sudo -E kolla_set_configs
Jan 23 10:01:35 compute-0 podman[151618]: 2026-01-23 10:01:35.173926964 +0000 UTC m=+1.425243370 container start ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 23 10:01:35 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 23 10:01:35 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 23 10:01:35 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 23 10:01:35 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 23 10:01:35 compute-0 systemd[151653]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 23 10:01:35 compute-0 systemd[151653]: Queued start job for default target Main User Target.
Jan 23 10:01:35 compute-0 systemd[151653]: Created slice User Application Slice.
Jan 23 10:01:35 compute-0 systemd[151653]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 23 10:01:35 compute-0 systemd[151653]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 10:01:35 compute-0 systemd[151653]: Reached target Paths.
Jan 23 10:01:35 compute-0 systemd[151653]: Reached target Timers.
Jan 23 10:01:35 compute-0 systemd[151653]: Starting D-Bus User Message Bus Socket...
Jan 23 10:01:35 compute-0 systemd[151653]: Starting Create User's Volatile Files and Directories...
Jan 23 10:01:35 compute-0 systemd[151653]: Finished Create User's Volatile Files and Directories.
Jan 23 10:01:35 compute-0 systemd[151653]: Listening on D-Bus User Message Bus Socket.
Jan 23 10:01:35 compute-0 systemd[151653]: Reached target Sockets.
Jan 23 10:01:35 compute-0 systemd[151653]: Reached target Basic System.
Jan 23 10:01:35 compute-0 systemd[151653]: Reached target Main User Target.
Jan 23 10:01:35 compute-0 systemd[151653]: Startup finished in 170ms.
Jan 23 10:01:35 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 23 10:01:35 compute-0 systemd[1]: Started Session c1 of User root.
Jan 23 10:01:35 compute-0 ovn_controller[151634]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 10:01:35 compute-0 ovn_controller[151634]: INFO:__main__:Validating config file
Jan 23 10:01:35 compute-0 ovn_controller[151634]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 10:01:35 compute-0 ovn_controller[151634]: INFO:__main__:Writing out command to execute
Jan 23 10:01:35 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 23 10:01:35 compute-0 ovn_controller[151634]: ++ cat /run_command
Jan 23 10:01:35 compute-0 ovn_controller[151634]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 23 10:01:35 compute-0 ovn_controller[151634]: + ARGS=
Jan 23 10:01:35 compute-0 ovn_controller[151634]: + sudo kolla_copy_cacerts
Jan 23 10:01:35 compute-0 systemd[1]: Started Session c2 of User root.
Jan 23 10:01:35 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 23 10:01:35 compute-0 ovn_controller[151634]: + [[ ! -n '' ]]
Jan 23 10:01:35 compute-0 ovn_controller[151634]: + . kolla_extend_start
Jan 23 10:01:35 compute-0 ovn_controller[151634]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 23 10:01:35 compute-0 ovn_controller[151634]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 23 10:01:35 compute-0 ovn_controller[151634]: + umask 0022
Jan 23 10:01:35 compute-0 ovn_controller[151634]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 23 10:01:35 compute-0 NetworkManager[48866]: <info>  [1769162495.5835] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 23 10:01:35 compute-0 NetworkManager[48866]: <info>  [1769162495.5845] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 10:01:35 compute-0 NetworkManager[48866]: <warn>  [1769162495.5848] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 10:01:35 compute-0 NetworkManager[48866]: <info>  [1769162495.5858] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 23 10:01:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:01:35 compute-0 NetworkManager[48866]: <info>  [1769162495.5865] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 23 10:01:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:01:35 compute-0 NetworkManager[48866]: <info>  [1769162495.5874] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 23 10:01:35 compute-0 kernel: br-int: entered promiscuous mode
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00024|main|INFO|OVS feature set changed, force recompute.
Jan 23 10:01:35 compute-0 systemd-udevd[151680]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:01:35 compute-0 edpm-start-podman-container[151618]: ovn_controller
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 10:01:35 compute-0 ovn_controller[151634]: 2026-01-23T10:01:35Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 10:01:35 compute-0 NetworkManager[48866]: <info>  [1769162495.7013] manager: (ovn-eb059b-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 23 10:01:35 compute-0 NetworkManager[48866]: <info>  [1769162495.7021] manager: (ovn-170ec8-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 23 10:01:35 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 23 10:01:35 compute-0 systemd-udevd[151682]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:01:35 compute-0 NetworkManager[48866]: <info>  [1769162495.7228] device (genev_sys_6081): carrier: link connected
Jan 23 10:01:35 compute-0 NetworkManager[48866]: <info>  [1769162495.7232] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Jan 23 10:01:35 compute-0 edpm-start-podman-container[151617]: Creating additional drop-in dependency for "ovn_controller" (ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d)
Jan 23 10:01:35 compute-0 systemd[1]: Reloading.
Jan 23 10:01:35 compute-0 podman[151641]: 2026-01-23 10:01:35.817188651 +0000 UTC m=+0.631752649 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 23 10:01:35 compute-0 systemd-rc-local-generator[151745]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:01:35 compute-0 systemd-sysv-generator[151750]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:01:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:35 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 170 B/s wr, 0 op/s
Jan 23 10:01:36 compute-0 systemd[1]: Started ovn_controller container.
Jan 23 10:01:36 compute-0 sudo[151574]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:36 compute-0 ceph-mon[74335]: pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:01:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:01:36 compute-0 NetworkManager[48866]: <info>  [1769162496.1834] manager: (ovn-8fb585-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 23 10:01:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400adf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:01:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:36.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:01:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:36.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:36 compute-0 ceph-mgr[74633]: [dashboard INFO request] [192.168.122.100:37646] [POST] [200] [0.004s] [4.0B] [89eb4624-f7cb-4845-b473-f57f0c7da3ec] /api/prometheus_receiver
Jan 23 10:01:37 compute-0 python3.9[151906]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 10:01:37 compute-0 ceph-mon[74335]: pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 170 B/s wr, 0 op/s
Jan 23 10:01:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:37 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:01:38 compute-0 sudo[152058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdtomclwmdtodmdhzdpzrkdjboafowrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162497.7456055-1827-76727667894283/AnsiballZ_stat.py'
Jan 23 10:01:38 compute-0 sudo[152058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:38 compute-0 sudo[152061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:01:38 compute-0 sudo[152061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:01:38 compute-0 sudo[152061]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400adf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:38 compute-0 python3.9[152060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:01:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:01:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:01:38 compute-0 sudo[152058]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:01:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:38.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:01:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:01:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:38.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:38 compute-0 sudo[152206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpccihxlplbiskevjuuzytcskxytxueq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162497.7456055-1827-76727667894283/AnsiballZ_copy.py'
Jan 23 10:01:38 compute-0 sudo[152206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:39 compute-0 python3.9[152208]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162497.7456055-1827-76727667894283/.source.yaml _original_basename=.e86xncpw follow=False checksum=a80724acad465d51ee59522dfe4a3a5c05876d7d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:01:39 compute-0 sudo[152206]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:39 compute-0 ceph-mon[74335]: pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:01:39 compute-0 sudo[152358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxeodldmehacnvkhrfonmkoebkewupjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162499.393326-1872-15230724348444/AnsiballZ_command.py'
Jan 23 10:01:39 compute-0 sudo[152358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:39 compute-0 python3.9[152360]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:01:39 compute-0 ovs-vsctl[152362]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 23 10:01:39 compute-0 sudo[152358]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:39] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:01:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:39] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:01:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:39 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:01:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:40 compute-0 sudo[152513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otcgguhyegunuqvcegkxcadamdptecfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162500.094643-1896-149420258273270/AnsiballZ_command.py'
Jan 23 10:01:40 compute-0 sudo[152513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:40 compute-0 python3.9[152515]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:01:40 compute-0 ovs-vsctl[152517]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 23 10:01:40 compute-0 sudo[152513]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:01:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:40.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:01:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:01:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:40.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:01:41 compute-0 sudo[152668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvmjmfrnbrioaucsyauretesvxwtjnlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162501.1750038-1938-136924243648299/AnsiballZ_command.py'
Jan 23 10:01:41 compute-0 sudo[152668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:41 compute-0 ceph-mon[74335]: pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:01:41 compute-0 python3.9[152670]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:01:41 compute-0 ovs-vsctl[152672]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 23 10:01:41 compute-0 sudo[152668]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:41 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400adf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:01:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:01:42 compute-0 sshd-session[140177]: Connection closed by 192.168.122.30 port 58092
Jan 23 10:01:42 compute-0 sshd-session[140174]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:01:42 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 23 10:01:42 compute-0 systemd[1]: session-50.scope: Consumed 1min 1.601s CPU time.
Jan 23 10:01:42 compute-0 systemd-logind[784]: Session 50 logged out. Waiting for processes to exit.
Jan 23 10:01:42 compute-0 systemd-logind[784]: Removed session 50.
Jan 23 10:01:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:42.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:42.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:43 compute-0 ceph-mon[74335]: pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:01:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:01:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:43 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:01:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400adf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:44.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:44.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:44 compute-0 ceph-mon[74335]: pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:01:45 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 23 10:01:45 compute-0 systemd[151653]: Activating special unit Exit the Session...
Jan 23 10:01:45 compute-0 systemd[151653]: Stopped target Main User Target.
Jan 23 10:01:45 compute-0 systemd[151653]: Stopped target Basic System.
Jan 23 10:01:45 compute-0 systemd[151653]: Stopped target Paths.
Jan 23 10:01:45 compute-0 systemd[151653]: Stopped target Sockets.
Jan 23 10:01:45 compute-0 systemd[151653]: Stopped target Timers.
Jan 23 10:01:45 compute-0 systemd[151653]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 10:01:45 compute-0 systemd[151653]: Closed D-Bus User Message Bus Socket.
Jan 23 10:01:45 compute-0 systemd[151653]: Stopped Create User's Volatile Files and Directories.
Jan 23 10:01:45 compute-0 systemd[151653]: Removed slice User Application Slice.
Jan 23 10:01:45 compute-0 systemd[151653]: Reached target Shutdown.
Jan 23 10:01:45 compute-0 systemd[151653]: Finished Exit the Session.
Jan 23 10:01:45 compute-0 systemd[151653]: Reached target Exit the Session.
Jan 23 10:01:45 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 23 10:01:45 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 23 10:01:45 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 23 10:01:45 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 23 10:01:45 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 23 10:01:45 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 23 10:01:45 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 23 10:01:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:45 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:01:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400adf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:46.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:01:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:46.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:01:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:01:46.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:01:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100147 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:01:47 compute-0 ceph-mon[74335]: pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:01:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:47 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 3 op/s
Jan 23 10:01:48 compute-0 sshd-session[152705]: Accepted publickey for zuul from 192.168.122.30 port 55798 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 10:01:48 compute-0 systemd-logind[784]: New session 52 of user zuul.
Jan 23 10:01:48 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 23 10:01:48 compute-0 sshd-session[152705]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:01:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:01:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:48.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:01:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:01:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:01:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:48.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:01:49 compute-0 ceph-mon[74335]: pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 3 op/s
Jan 23 10:01:49 compute-0 python3.9[152858]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:01:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:49] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:01:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:49] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:01:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:49 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400adf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:01:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:01:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:01:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:01:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:01:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:01:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:01:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:01:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 425 B/s wr, 1 op/s
Jan 23 10:01:50 compute-0 sudo[153014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scdnszuxitazocsydjrxktxpvfmaqjau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162509.8671463-57-42465578461052/AnsiballZ_file.py'
Jan 23 10:01:50 compute-0 sudo[153014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:01:50 compute-0 python3.9[153016]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:01:50 compute-0 sudo[153014]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:01:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:50.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:01:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:50.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:50 compute-0 sudo[153166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eslhrniikmzqnctymvdqmltogdpzuqhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162510.6844373-57-47991715188216/AnsiballZ_file.py'
Jan 23 10:01:50 compute-0 sudo[153166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:51 compute-0 python3.9[153168]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:01:51 compute-0 sudo[153166]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:51 compute-0 ceph-mon[74335]: pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 425 B/s wr, 1 op/s
Jan 23 10:01:51 compute-0 sudo[153319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlvvogzauojezmdgpadkizywtzagmfzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162511.4961054-57-8068994089564/AnsiballZ_file.py'
Jan 23 10:01:51 compute-0 sudo[153319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:51 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:52 compute-0 python3.9[153321]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:01:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 425 B/s wr, 1 op/s
Jan 23 10:01:52 compute-0 sudo[153319]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400adf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:52 compute-0 sudo[153472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgsadcygnliwkemvsolnzrxzyecwxnrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162512.2755733-57-12468562465892/AnsiballZ_file.py'
Jan 23 10:01:52 compute-0 sudo[153472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:52.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:01:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:52.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:01:52 compute-0 python3.9[153474]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:01:52 compute-0 sudo[153472]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:53 compute-0 sudo[153625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gprzqarwknljsvqazmxtgjvnoweejxnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162512.9539032-57-59709819606605/AnsiballZ_file.py'
Jan 23 10:01:53 compute-0 sudo[153625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:53 compute-0 python3.9[153627]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:01:53 compute-0 sudo[153625]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:53 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:01:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:01:54 compute-0 ceph-mon[74335]: pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 425 B/s wr, 1 op/s
Jan 23 10:01:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400adf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:54 compute-0 python3.9[153780]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:01:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:54.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:01:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:54.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:01:55 compute-0 sudo[153930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwhqnrxkdhuiqujjlxnxumadrlqgjsey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162514.7523208-189-228590876460238/AnsiballZ_seboolean.py'
Jan 23 10:01:55 compute-0 sudo[153930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:01:55 compute-0 python3.9[153932]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 23 10:01:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:55 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:01:56 compute-0 sudo[153930]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:01:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:56.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:01:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:56.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:01:56.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:01:57 compute-0 python3.9[154084]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:01:57 compute-0 ceph-mon[74335]: pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:01:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:57 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:01:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400adf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:58 compute-0 python3.9[154208]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769162516.3110933-213-27657248463165/.source follow=False _original_basename=haproxy.j2 checksum=1daf285be4abb25cbd7ba376734de140aac9aefe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:01:58 compute-0 sudo[154210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:01:58 compute-0 sudo[154210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:01:58 compute-0 sudo[154210]: pam_unix(sudo:session): session closed for user root
Jan 23 10:01:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:01:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:01:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:58.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:01:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:01:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:01:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:58.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:01:59 compute-0 python3.9[154392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:01:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:59] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 10:01:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:01:59] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 10:01:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:01:59 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c650 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:02:00 compute-0 python3.9[154514]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769162518.6011045-258-260310360337113/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:02:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:00 compute-0 ceph-mon[74335]: pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:02:00 compute-0 ceph-mon[74335]: pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:02:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:02:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:00.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:02:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:00.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:00 compute-0 sudo[154665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zghirtxbqcktkptlevuqsxkdrefzcmsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162520.45412-309-191618485363114/AnsiballZ_setup.py'
Jan 23 10:02:00 compute-0 sudo[154665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:01 compute-0 python3.9[154667]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 10:02:01 compute-0 sudo[154665]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:01 compute-0 ceph-mon[74335]: pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:02:01 compute-0 sudo[154750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdronzaugelyuyxcoxjonffiqrciirnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162520.45412-309-191618485363114/AnsiballZ_dnf.py'
Jan 23 10:02:01 compute-0 sudo[154750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:02 compute-0 python3.9[154752]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 10:02:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100202 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:02:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:02:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea400adf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c00c670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:02.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:02:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:02.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:02:03 compute-0 ceph-mon[74335]: pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:02:03 compute-0 sudo[154750]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:02:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:04 compute-0 ceph-mon[74335]: pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:02:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:04.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:04 compute-0 sudo[154908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpnfkkxpvglhkzhivmgbrzmiornnblxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162524.0892544-345-199035232405936/AnsiballZ_systemd.py'
Jan 23 10:02:04 compute-0 sudo[154908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:04.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:04 compute-0 sudo[154911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:02:04 compute-0 sudo[154911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:04 compute-0 sudo[154911]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:04 compute-0 sudo[154936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:02:04 compute-0 sudo[154936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:04 compute-0 python3.9[154910]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 10:02:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:02:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:02:05 compute-0 sudo[154908]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:05 compute-0 sudo[154936]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:02:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:02:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:02:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:02:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:02:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:02:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:02:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:02:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:02:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:02:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:02:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:02:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:02:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:02:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:02:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:02:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:02:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:02:05 compute-0 sudo[155145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:02:05 compute-0 sudo[155145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:05 compute-0 sudo[155145]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:05 compute-0 sudo[155171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:02:05 compute-0 sudo[155171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:05 compute-0 python3.9[155144]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:02:06 compute-0 podman[155360]: 2026-01-23 10:02:06.227851008 +0000 UTC m=+0.056034389 container create 8fa9562e7aae1ca69bccbd7751fa97338862472c6821c57ea46f6a1656e20d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:02:06 compute-0 systemd[1]: Started libpod-conmon-8fa9562e7aae1ca69bccbd7751fa97338862472c6821c57ea46f6a1656e20d89.scope.
Jan 23 10:02:06 compute-0 podman[155360]: 2026-01-23 10:02:06.194432835 +0000 UTC m=+0.022616166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:02:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:02:06 compute-0 podman[155360]: 2026-01-23 10:02:06.335133659 +0000 UTC m=+0.163317000 container init 8fa9562e7aae1ca69bccbd7751fa97338862472c6821c57ea46f6a1656e20d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_torvalds, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 10:02:06 compute-0 podman[155360]: 2026-01-23 10:02:06.347489553 +0000 UTC m=+0.175672874 container start 8fa9562e7aae1ca69bccbd7751fa97338862472c6821c57ea46f6a1656e20d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_torvalds, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 10:02:06 compute-0 podman[155360]: 2026-01-23 10:02:06.35202644 +0000 UTC m=+0.180209761 container attach 8fa9562e7aae1ca69bccbd7751fa97338862472c6821c57ea46f6a1656e20d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:02:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84003050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:06 compute-0 ovn_controller[151634]: 2026-01-23T10:02:06Z|00025|memory|INFO|16128 kB peak resident set size after 30.8 seconds
Jan 23 10:02:06 compute-0 ovn_controller[151634]: 2026-01-23T10:02:06Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 23 10:02:06 compute-0 brave_torvalds[155375]: 167 167
Jan 23 10:02:06 compute-0 python3.9[155355]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769162525.302168-369-154519962426622/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:02:06 compute-0 systemd[1]: libpod-8fa9562e7aae1ca69bccbd7751fa97338862472c6821c57ea46f6a1656e20d89.scope: Deactivated successfully.
Jan 23 10:02:06 compute-0 podman[155360]: 2026-01-23 10:02:06.364981013 +0000 UTC m=+0.193164344 container died 8fa9562e7aae1ca69bccbd7751fa97338862472c6821c57ea46f6a1656e20d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Jan 23 10:02:06 compute-0 podman[155372]: 2026-01-23 10:02:06.399158828 +0000 UTC m=+0.117565173 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7a3f5e859557ac77fb3ea9a985e76939156597cf5aba6441e7222af9bd10d6b-merged.mount: Deactivated successfully.
Jan 23 10:02:06 compute-0 podman[155360]: 2026-01-23 10:02:06.438750698 +0000 UTC m=+0.266934019 container remove 8fa9562e7aae1ca69bccbd7751fa97338862472c6821c57ea46f6a1656e20d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:02:06 compute-0 systemd[1]: libpod-conmon-8fa9562e7aae1ca69bccbd7751fa97338862472c6821c57ea46f6a1656e20d89.scope: Deactivated successfully.
Jan 23 10:02:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:02:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:06.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:02:06 compute-0 podman[155474]: 2026-01-23 10:02:06.601006674 +0000 UTC m=+0.030803924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:02:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:06.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:02:06.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:02:07 compute-0 python3.9[155590]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:07 compute-0 podman[155474]: 2026-01-23 10:02:07.102446017 +0000 UTC m=+0.532243247 container create ef9205fced8191badfe31bcda6f179469d23962934f9c15508fc1419c018b806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:02:07 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:02:07 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:02:07 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:02:07 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:02:07 compute-0 ceph-mon[74335]: pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:02:07 compute-0 systemd[1]: Started libpod-conmon-ef9205fced8191badfe31bcda6f179469d23962934f9c15508fc1419c018b806.scope.
Jan 23 10:02:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b6772c9806f75fa2af147090657c014e70935e474f81d858237267888a483b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b6772c9806f75fa2af147090657c014e70935e474f81d858237267888a483b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b6772c9806f75fa2af147090657c014e70935e474f81d858237267888a483b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b6772c9806f75fa2af147090657c014e70935e474f81d858237267888a483b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b6772c9806f75fa2af147090657c014e70935e474f81d858237267888a483b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:07 compute-0 podman[155474]: 2026-01-23 10:02:07.219798102 +0000 UTC m=+0.649595352 container init ef9205fced8191badfe31bcda6f179469d23962934f9c15508fc1419c018b806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 10:02:07 compute-0 podman[155474]: 2026-01-23 10:02:07.229685112 +0000 UTC m=+0.659482342 container start ef9205fced8191badfe31bcda6f179469d23962934f9c15508fc1419c018b806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_merkle, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:02:07 compute-0 podman[155474]: 2026-01-23 10:02:07.235131577 +0000 UTC m=+0.664928807 container attach ef9205fced8191badfe31bcda6f179469d23962934f9c15508fc1419c018b806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 23 10:02:07 compute-0 python3.9[155718]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769162526.557015-369-137117962434556/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:02:07 compute-0 unruffled_merkle[155638]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:02:07 compute-0 unruffled_merkle[155638]: --> All data devices are unavailable
Jan 23 10:02:07 compute-0 systemd[1]: libpod-ef9205fced8191badfe31bcda6f179469d23962934f9c15508fc1419c018b806.scope: Deactivated successfully.
Jan 23 10:02:07 compute-0 podman[155474]: 2026-01-23 10:02:07.615523552 +0000 UTC m=+1.045320802 container died ef9205fced8191badfe31bcda6f179469d23962934f9c15508fc1419c018b806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_merkle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 10:02:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0b6772c9806f75fa2af147090657c014e70935e474f81d858237267888a483b-merged.mount: Deactivated successfully.
Jan 23 10:02:07 compute-0 podman[155474]: 2026-01-23 10:02:07.663841806 +0000 UTC m=+1.093639036 container remove ef9205fced8191badfe31bcda6f179469d23962934f9c15508fc1419c018b806 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_merkle, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:02:07 compute-0 systemd[1]: libpod-conmon-ef9205fced8191badfe31bcda6f179469d23962934f9c15508fc1419c018b806.scope: Deactivated successfully.
Jan 23 10:02:07 compute-0 sudo[155171]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:07 compute-0 sudo[155767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:02:07 compute-0 sudo[155767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:07 compute-0 sudo[155767]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:07 compute-0 sudo[155792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:02:07 compute-0 sudo[155792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:02:08 compute-0 podman[155858]: 2026-01-23 10:02:08.250314905 +0000 UTC m=+0.046665044 container create 39b2c924a226e8dc4a3896d1e6c38a1dda464b266a03bf78a1a2da2729306415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_solomon, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:02:08 compute-0 systemd[1]: Started libpod-conmon-39b2c924a226e8dc4a3896d1e6c38a1dda464b266a03bf78a1a2da2729306415.scope.
Jan 23 10:02:08 compute-0 podman[155858]: 2026-01-23 10:02:08.231503796 +0000 UTC m=+0.027853855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:02:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:02:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:08 compute-0 podman[155858]: 2026-01-23 10:02:08.43199509 +0000 UTC m=+0.228345149 container init 39b2c924a226e8dc4a3896d1e6c38a1dda464b266a03bf78a1a2da2729306415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 10:02:08 compute-0 podman[155858]: 2026-01-23 10:02:08.439530208 +0000 UTC m=+0.235880247 container start 39b2c924a226e8dc4a3896d1e6c38a1dda464b266a03bf78a1a2da2729306415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:02:08 compute-0 podman[155858]: 2026-01-23 10:02:08.44354979 +0000 UTC m=+0.239899829 container attach 39b2c924a226e8dc4a3896d1e6c38a1dda464b266a03bf78a1a2da2729306415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 10:02:08 compute-0 clever_solomon[155897]: 167 167
Jan 23 10:02:08 compute-0 systemd[1]: libpod-39b2c924a226e8dc4a3896d1e6c38a1dda464b266a03bf78a1a2da2729306415.scope: Deactivated successfully.
Jan 23 10:02:08 compute-0 podman[155858]: 2026-01-23 10:02:08.447280623 +0000 UTC m=+0.243630672 container died 39b2c924a226e8dc4a3896d1e6c38a1dda464b266a03bf78a1a2da2729306415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_solomon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 23 10:02:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-910dd665335f28443683563a6ecf509e092cba9f859fcc131e919526753260c3-merged.mount: Deactivated successfully.
Jan 23 10:02:08 compute-0 podman[155858]: 2026-01-23 10:02:08.48349311 +0000 UTC m=+0.279843139 container remove 39b2c924a226e8dc4a3896d1e6c38a1dda464b266a03bf78a1a2da2729306415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 10:02:08 compute-0 systemd[1]: libpod-conmon-39b2c924a226e8dc4a3896d1e6c38a1dda464b266a03bf78a1a2da2729306415.scope: Deactivated successfully.
Jan 23 10:02:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84003050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:08 compute-0 podman[156024]: 2026-01-23 10:02:08.656020768 +0000 UTC m=+0.044421127 container create 2937aa3b1e4389070620c8a812ec65cd44654a171861db61a59ec09f544717c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:02:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:08.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:08 compute-0 systemd[1]: Started libpod-conmon-2937aa3b1e4389070620c8a812ec65cd44654a171861db61a59ec09f544717c8.scope.
Jan 23 10:02:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:08.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:02:08 compute-0 podman[156024]: 2026-01-23 10:02:08.635341541 +0000 UTC m=+0.023741930 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6559e81b398feba74bbb5184569b335cbca6a65f131be73527001478cce56f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6559e81b398feba74bbb5184569b335cbca6a65f131be73527001478cce56f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6559e81b398feba74bbb5184569b335cbca6a65f131be73527001478cce56f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6559e81b398feba74bbb5184569b335cbca6a65f131be73527001478cce56f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:08 compute-0 podman[156024]: 2026-01-23 10:02:08.74950002 +0000 UTC m=+0.137900409 container init 2937aa3b1e4389070620c8a812ec65cd44654a171861db61a59ec09f544717c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:02:08 compute-0 podman[156024]: 2026-01-23 10:02:08.75842613 +0000 UTC m=+0.146826489 container start 2937aa3b1e4389070620c8a812ec65cd44654a171861db61a59ec09f544717c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 23 10:02:08 compute-0 podman[156024]: 2026-01-23 10:02:08.763461433 +0000 UTC m=+0.151861812 container attach 2937aa3b1e4389070620c8a812ec65cd44654a171861db61a59ec09f544717c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jepsen, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 10:02:08 compute-0 python3.9[156019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]: {
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:     "1": [
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:         {
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             "devices": [
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "/dev/loop3"
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             ],
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             "lv_name": "ceph_lv0",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             "lv_size": "21470642176",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             "name": "ceph_lv0",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             "tags": {
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.cluster_name": "ceph",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.crush_device_class": "",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.encrypted": "0",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.osd_id": "1",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.type": "block",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.vdo": "0",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:                 "ceph.with_tpm": "0"
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             },
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             "type": "block",
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:             "vg_name": "ceph_vg0"
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:         }
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]:     ]
Jan 23 10:02:09 compute-0 exciting_jepsen[156041]: }
Jan 23 10:02:09 compute-0 systemd[1]: libpod-2937aa3b1e4389070620c8a812ec65cd44654a171861db61a59ec09f544717c8.scope: Deactivated successfully.
Jan 23 10:02:09 compute-0 podman[156024]: 2026-01-23 10:02:09.131473083 +0000 UTC m=+0.519873462 container died 2937aa3b1e4389070620c8a812ec65cd44654a171861db61a59ec09f544717c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jepsen, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:02:09 compute-0 ceph-mon[74335]: pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:02:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6559e81b398feba74bbb5184569b335cbca6a65f131be73527001478cce56f9-merged.mount: Deactivated successfully.
Jan 23 10:02:09 compute-0 podman[156024]: 2026-01-23 10:02:09.175769265 +0000 UTC m=+0.564169624 container remove 2937aa3b1e4389070620c8a812ec65cd44654a171861db61a59ec09f544717c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jepsen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 10:02:09 compute-0 systemd[1]: libpod-conmon-2937aa3b1e4389070620c8a812ec65cd44654a171861db61a59ec09f544717c8.scope: Deactivated successfully.
Jan 23 10:02:09 compute-0 sudo[155792]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:09 compute-0 sudo[156183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:02:09 compute-0 sudo[156183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:09 compute-0 sudo[156183]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:09 compute-0 python3.9[156170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769162528.3210478-501-269053113785696/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:02:09 compute-0 sudo[156208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:02:09 compute-0 sudo[156208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:09 compute-0 podman[156418]: 2026-01-23 10:02:09.810195016 +0000 UTC m=+0.053277685 container create 50b70352b32273be57b23660874ff5032257fd50b5694089bc601604baf949c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:02:09 compute-0 systemd[1]: Started libpod-conmon-50b70352b32273be57b23660874ff5032257fd50b5694089bc601604baf949c8.scope.
Jan 23 10:02:09 compute-0 podman[156418]: 2026-01-23 10:02:09.782694363 +0000 UTC m=+0.025777052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:02:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:02:09 compute-0 python3.9[156423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:09] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 10:02:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:09] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 10:02:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:02:10 compute-0 podman[156418]: 2026-01-23 10:02:10.306966248 +0000 UTC m=+0.550048937 container init 50b70352b32273be57b23660874ff5032257fd50b5694089bc601604baf949c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_albattani, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:02:10 compute-0 podman[156418]: 2026-01-23 10:02:10.315585259 +0000 UTC m=+0.558667928 container start 50b70352b32273be57b23660874ff5032257fd50b5694089bc601604baf949c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:02:10 compute-0 fervent_albattani[156438]: 167 167
Jan 23 10:02:10 compute-0 systemd[1]: libpod-50b70352b32273be57b23660874ff5032257fd50b5694089bc601604baf949c8.scope: Deactivated successfully.
Jan 23 10:02:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:10 compute-0 python3.9[156562]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769162529.493951-501-130650465319557/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:02:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:02:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:10.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:02:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:10.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:10 compute-0 podman[156418]: 2026-01-23 10:02:10.97207307 +0000 UTC m=+1.215155739 container attach 50b70352b32273be57b23660874ff5032257fd50b5694089bc601604baf949c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_albattani, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 23 10:02:10 compute-0 podman[156418]: 2026-01-23 10:02:10.973264656 +0000 UTC m=+1.216347325 container died 50b70352b32273be57b23660874ff5032257fd50b5694089bc601604baf949c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:02:11 compute-0 python3.9[156724]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:02:11 compute-0 ceph-mon[74335]: pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:02:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:11 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:02:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-30f5987acb1c999f52299d828943ee11cb85f41fb5eadad536422979bfc6df7d-merged.mount: Deactivated successfully.
Jan 23 10:02:11 compute-0 sudo[156878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsjgdzfakbwgbqhbpebelpimuooaehxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162531.5229723-615-200897092452527/AnsiballZ_file.py'
Jan 23 10:02:11 compute-0 sudo[156878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:11 compute-0 podman[156418]: 2026-01-23 10:02:11.928535059 +0000 UTC m=+2.171617728 container remove 50b70352b32273be57b23660874ff5032257fd50b5694089bc601604baf949c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_albattani, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:02:11 compute-0 systemd[1]: libpod-conmon-50b70352b32273be57b23660874ff5032257fd50b5694089bc601604baf949c8.scope: Deactivated successfully.
Jan 23 10:02:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84003050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:12 compute-0 python3.9[156880]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:02:12 compute-0 sudo[156878]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:12 compute-0 podman[156888]: 2026-01-23 10:02:12.111959577 +0000 UTC m=+0.051644986 container create 1a87ae7a1edbfa409ca44af9964e4ceb71291654403b68c7e4f121de0bac7cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_albattani, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:02:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:02:12 compute-0 systemd[1]: Started libpod-conmon-1a87ae7a1edbfa409ca44af9964e4ceb71291654403b68c7e4f121de0bac7cf8.scope.
Jan 23 10:02:12 compute-0 podman[156888]: 2026-01-23 10:02:12.089054773 +0000 UTC m=+0.028740172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:02:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9f507410fdc374ff8323f47db6dcb4cd48b1fbb94170bf1584f3b0d8971675/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9f507410fdc374ff8323f47db6dcb4cd48b1fbb94170bf1584f3b0d8971675/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9f507410fdc374ff8323f47db6dcb4cd48b1fbb94170bf1584f3b0d8971675/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9f507410fdc374ff8323f47db6dcb4cd48b1fbb94170bf1584f3b0d8971675/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:12 compute-0 podman[156888]: 2026-01-23 10:02:12.216779193 +0000 UTC m=+0.156464612 container init 1a87ae7a1edbfa409ca44af9964e4ceb71291654403b68c7e4f121de0bac7cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_albattani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:02:12 compute-0 podman[156888]: 2026-01-23 10:02:12.224490896 +0000 UTC m=+0.164176295 container start 1a87ae7a1edbfa409ca44af9964e4ceb71291654403b68c7e4f121de0bac7cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 10:02:12 compute-0 podman[156888]: 2026-01-23 10:02:12.229310073 +0000 UTC m=+0.168995582 container attach 1a87ae7a1edbfa409ca44af9964e4ceb71291654403b68c7e4f121de0bac7cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_albattani, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:02:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:12 compute-0 sudo[157072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izgbqkoqrcdukojdjqskmuiixamecwcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162532.2459028-639-194356545770754/AnsiballZ_stat.py'
Jan 23 10:02:12 compute-0 sudo[157072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:12.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:12 compute-0 python3.9[157076]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:12.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:12 compute-0 sudo[157072]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:12 compute-0 lvm[157206]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:02:12 compute-0 lvm[157206]: VG ceph_vg0 finished
Jan 23 10:02:12 compute-0 sudo[157209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkqkdcauekjiuhbvegiridwnzivgmbbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162532.2459028-639-194356545770754/AnsiballZ_file.py'
Jan 23 10:02:12 compute-0 sudo[157209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:13 compute-0 heuristic_albattani[156930]: {}
Jan 23 10:02:13 compute-0 systemd[1]: libpod-1a87ae7a1edbfa409ca44af9964e4ceb71291654403b68c7e4f121de0bac7cf8.scope: Deactivated successfully.
Jan 23 10:02:13 compute-0 systemd[1]: libpod-1a87ae7a1edbfa409ca44af9964e4ceb71291654403b68c7e4f121de0bac7cf8.scope: Consumed 1.302s CPU time.
Jan 23 10:02:13 compute-0 podman[156888]: 2026-01-23 10:02:13.061915459 +0000 UTC m=+1.001600878 container died 1a87ae7a1edbfa409ca44af9964e4ceb71291654403b68c7e4f121de0bac7cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_albattani, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:02:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd9f507410fdc374ff8323f47db6dcb4cd48b1fbb94170bf1584f3b0d8971675-merged.mount: Deactivated successfully.
Jan 23 10:02:13 compute-0 podman[156888]: 2026-01-23 10:02:13.119477322 +0000 UTC m=+1.059162721 container remove 1a87ae7a1edbfa409ca44af9964e4ceb71291654403b68c7e4f121de0bac7cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_albattani, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 10:02:13 compute-0 systemd[1]: libpod-conmon-1a87ae7a1edbfa409ca44af9964e4ceb71291654403b68c7e4f121de0bac7cf8.scope: Deactivated successfully.
Jan 23 10:02:13 compute-0 sudo[156208]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:02:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:02:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:02:13 compute-0 python3.9[157212]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:02:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:02:13 compute-0 sudo[157209]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:13 compute-0 sudo[157230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:02:13 compute-0 sudo[157230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:13 compute-0 sudo[157230]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:13 compute-0 ceph-mon[74335]: pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:02:13 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:02:13 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:02:13 compute-0 sudo[157404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxipwxnylrcpazksalqlbweaqjcqtltz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162533.350651-639-108411457158995/AnsiballZ_stat.py'
Jan 23 10:02:13 compute-0 sudo[157404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:13 compute-0 python3.9[157406]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:13 compute-0 sudo[157404]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:14 compute-0 sudo[157483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxggmivbaeewkdbmtsefcuocdsunpukw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162533.350651-639-108411457158995/AnsiballZ_file.py'
Jan 23 10:02:14 compute-0 sudo[157483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:02:14 compute-0 python3.9[157485]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:02:14 compute-0 sudo[157483]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84003050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:14 compute-0 ceph-mon[74335]: pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:02:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84003050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:14.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:14.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:14 compute-0 sudo[157636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-texcdhvgjyxaykvccpceeuabuoazrsvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162534.4510748-708-2642859805828/AnsiballZ_file.py'
Jan 23 10:02:14 compute-0 sudo[157636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:02:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:02:14 compute-0 python3.9[157638]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:02:15 compute-0 sudo[157636]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:15 compute-0 sudo[157788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mceejsqsehgkbwkwrgbwlwbojfihsnux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162535.1553502-732-189245152413280/AnsiballZ_stat.py'
Jan 23 10:02:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:15 compute-0 sudo[157788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:15 compute-0 python3.9[157790]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:15 compute-0 sudo[157788]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:15 compute-0 sudo[157867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzgtgtcsrpdkiaawaoicdbqibgssxgym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162535.1553502-732-189245152413280/AnsiballZ_file.py'
Jan 23 10:02:15 compute-0 sudo[157867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s
Jan 23 10:02:16 compute-0 python3.9[157869]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:02:16 compute-0 sudo[157867]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:16 compute-0 sudo[158020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnoackvswceaczfwkdummmuhylarfqzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162536.317248-768-122629680168337/AnsiballZ_stat.py'
Jan 23 10:02:16 compute-0 sudo[158020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:16.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:16.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:16 compute-0 python3.9[158022]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:16 compute-0 sudo[158020]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:02:16.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:02:17 compute-0 ceph-mon[74335]: pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s
Jan 23 10:02:17 compute-0 sudo[158098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izsdlnttrjyfiwwwvbfthyscxphwlzfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162536.317248-768-122629680168337/AnsiballZ_file.py'
Jan 23 10:02:17 compute-0 sudo[158098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:17 compute-0 python3.9[158100]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:02:17 compute-0 sudo[158098]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:17 compute-0 sudo[158251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeswalhjaetkarpvnaujiysezzgtmwjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162537.4864323-804-235263736608011/AnsiballZ_systemd.py'
Jan 23 10:02:17 compute-0 sudo[158251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:17 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:02:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:18 compute-0 python3.9[158253]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:02:18 compute-0 systemd[1]: Reloading.
Jan 23 10:02:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:02:18 compute-0 systemd-rc-local-generator[158279]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:02:18 compute-0 systemd-sysv-generator[158283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:02:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:18 compute-0 sudo[158251]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:18 compute-0 sudo[158316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:02:18 compute-0 sudo[158316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:18 compute-0 sudo[158316]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:18.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:18.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:19 compute-0 sudo[158466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfaynphuylyfeugtficqqnwjbyjtcrem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162538.7407944-828-77387844899392/AnsiballZ_stat.py'
Jan 23 10:02:19 compute-0 sudo[158466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:19 compute-0 python3.9[158468]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:19 compute-0 sudo[158466]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:19 compute-0 ceph-mon[74335]: pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:02:19 compute-0 sudo[158544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulslfxwihxpfkjsoznmlcsxzdtnjecku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162538.7407944-828-77387844899392/AnsiballZ_file.py'
Jan 23 10:02:19 compute-0 sudo[158544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:19 compute-0 python3.9[158546]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:02:19 compute-0 sudo[158544]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:02:19
Jan 23 10:02:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:02:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:02:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.nfs', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.control']
Jan 23 10:02:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:02:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:19] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 10:02:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:19] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Jan 23 10:02:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:02:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:02:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84003050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:02:20 compute-0 sudo[158698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yywyxupqpncdwrpnhybidopzrutvreqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162539.9373133-864-279071190328132/AnsiballZ_stat.py'
Jan 23 10:02:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:02:20 compute-0 sudo[158698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:20 compute-0 python3.9[158700]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:20 compute-0 sudo[158698]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:02:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:20 compute-0 sudo[158776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjklvwrwkbitmuehaalyugqhrsnzcztp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162539.9373133-864-279071190328132/AnsiballZ_file.py'
Jan 23 10:02:20 compute-0 sudo[158776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:20.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:20.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:20 compute-0 python3.9[158778]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:02:20 compute-0 sudo[158776]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:21 compute-0 sudo[158928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvrentbhocokatyyyhtvihwpjhlggmdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162541.0776403-900-52303207873614/AnsiballZ_systemd.py'
Jan 23 10:02:21 compute-0 sudo[158928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:21 compute-0 ceph-mon[74335]: pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:02:21 compute-0 python3.9[158930]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:02:21 compute-0 systemd[1]: Reloading.
Jan 23 10:02:21 compute-0 systemd-rc-local-generator[158957]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:02:21 compute-0 systemd-sysv-generator[158960]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:02:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:22 compute-0 systemd[1]: Starting Create netns directory...
Jan 23 10:02:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 10:02:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 10:02:22 compute-0 systemd[1]: Finished Create netns directory.
Jan 23 10:02:22 compute-0 sudo[158928]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:02:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84003050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:22.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:22 compute-0 sudo[159123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzaiiqjfojofrkfygwkaqraepqxznwjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162542.4482331-930-158928218740307/AnsiballZ_file.py'
Jan 23 10:02:22 compute-0 sudo[159123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:02:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:22.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:02:23 compute-0 python3.9[159125]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:02:23 compute-0 sudo[159123]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:23 compute-0 ceph-mon[74335]: pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:02:23 compute-0 sudo[159275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfbcxpohgbxqfjoawlfgwbvzbedbrkrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162543.3059113-954-270187811383419/AnsiballZ_stat.py'
Jan 23 10:02:23 compute-0 sudo[159275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:23 compute-0 python3.9[159277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:23 compute-0 sudo[159275]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100224 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:02:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:02:24 compute-0 sudo[159400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlyayovbjhhgxewuuwhlbdaqcxpcdweo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162543.3059113-954-270187811383419/AnsiballZ_copy.py'
Jan 23 10:02:24 compute-0 sudo[159400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:24 compute-0 python3.9[159402]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769162543.3059113-954-270187811383419/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:02:24 compute-0 sudo[159400]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:24.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:24.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:25 compute-0 sudo[159552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqtrtvfnriypyodsqdodshzrvmmaqbkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162544.8599675-1005-260280667534208/AnsiballZ_file.py'
Jan 23 10:02:25 compute-0 sudo[159552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:25 compute-0 python3.9[159554]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:02:25 compute-0 sudo[159552]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:25 compute-0 ceph-mon[74335]: pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:02:25 compute-0 sudo[159705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldjdrwcvcbrqeuyohwvbxknylhdylnfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162545.579411-1029-106957372185831/AnsiballZ_file.py'
Jan 23 10:02:25 compute-0 sudo[159705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:02:26 compute-0 python3.9[159707]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:02:26 compute-0 sudo[159705]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:26 compute-0 sudo[159858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrvbmtjtzxvpwopyljvnfpzftiqrbwho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162546.3421128-1053-7927322398208/AnsiballZ_stat.py'
Jan 23 10:02:26 compute-0 sudo[159858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:26.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:26.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:26 compute-0 python3.9[159860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:02:26 compute-0 sudo[159858]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:02:26.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:02:27 compute-0 sudo[159981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxhkyicomdierqsxzqdrefchicquezia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162546.3421128-1053-7927322398208/AnsiballZ_copy.py'
Jan 23 10:02:27 compute-0 sudo[159981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:27 compute-0 ceph-mon[74335]: pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:02:27 compute-0 python3.9[159983]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162546.3421128-1053-7927322398208/.source.json _original_basename=.8_41qwa1 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:02:27 compute-0 sudo[159981]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e84004930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Jan 23 10:02:28 compute-0 python3.9[160134]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:02:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:28.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:28.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:29 compute-0 ceph-mon[74335]: pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Jan 23 10:02:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:29] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:02:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:29] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:02:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:02:30 compute-0 sudo[160558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qachimkrttwiilmsnubmosxytsyfisbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162549.7225935-1173-242027743004583/AnsiballZ_container_config_data.py'
Jan 23 10:02:30 compute-0 sudo[160558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:30 compute-0 python3.9[160560]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 23 10:02:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:30 compute-0 sudo[160558]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:30.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:30.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:30 compute-0 ceph-mon[74335]: pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:02:31 compute-0 sudo[160710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhbubhhceglxkuzhwhvjaehvrvbtdvuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162550.8560662-1206-30164021769263/AnsiballZ_container_config_hash.py'
Jan 23 10:02:31 compute-0 sudo[160710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:31 compute-0 python3.9[160712]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 10:02:31 compute-0 sudo[160710]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:02:32 compute-0 sudo[160864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xphcwvslyzcvxawwtkqhzqkiqenecyzo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769162551.848592-1236-158609236916126/AnsiballZ_edpm_container_manage.py'
Jan 23 10:02:32 compute-0 sudo[160864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:32.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:32 compute-0 python3[160866]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 10:02:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:32.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:02:34 compute-0 ceph-mon[74335]: pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:02:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:02:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:34.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:02:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:34.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:02:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:02:35 compute-0 ceph-mon[74335]: pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:02:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:02:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 23 10:02:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4009350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:36.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:36.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:02:36.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:02:37 compute-0 ceph-mon[74335]: pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 23 10:02:37 compute-0 podman[160936]: 2026-01-23 10:02:37.282958233 +0000 UTC m=+0.802235578 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 10:02:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80003640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s
Jan 23 10:02:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:38 compute-0 sudo[160982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:02:38 compute-0 sudo[160982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:38 compute-0 sudo[160982]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:02:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:38.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:02:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:38.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100239 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:02:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:39] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Jan 23 10:02:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:39] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Jan 23 10:02:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4009350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s
Jan 23 10:02:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:40 compute-0 ceph-mon[74335]: pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s
Jan 23 10:02:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:40.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:40.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 0 B/s wr, 141 op/s
Jan 23 10:02:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4009350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:42.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:02:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:42.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:02:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 0 B/s wr, 141 op/s
Jan 23 10:02:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea4009350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:02:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:44.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:02:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:44.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:45 compute-0 podman[160880]: 2026-01-23 10:02:45.936433463 +0000 UTC m=+13.142381977 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 23 10:02:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 85 B/s wr, 150 op/s
Jan 23 10:02:46 compute-0 ceph-mon[74335]: pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s
Jan 23 10:02:46 compute-0 ceph-mon[74335]: pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 0 B/s wr, 141 op/s
Jan 23 10:02:46 compute-0 podman[161070]: 2026-01-23 10:02:46.061293922 +0000 UTC m=+0.025107669 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 23 10:02:46 compute-0 podman[161070]: 2026-01-23 10:02:46.226508303 +0000 UTC m=+0.190322020 container create 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:02:46 compute-0 python3[160866]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 23 10:02:46 compute-0 sudo[160864]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:02:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:46.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:02:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:46.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:02:46.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:02:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:47 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:02:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40094f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 85 B/s wr, 97 op/s
Jan 23 10:02:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:48 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:48 compute-0 sudo[161259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwigjmzigmtvyzmdahxkhkrmqlhzckub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162568.394131-1260-266001526769172/AnsiballZ_stat.py'
Jan 23 10:02:48 compute-0 sudo[161259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:48 compute-0 ceph-mon[74335]: pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 0 B/s wr, 141 op/s
Jan 23 10:02:48 compute-0 ceph-mon[74335]: pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 85 B/s wr, 150 op/s
Jan 23 10:02:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:02:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:48.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:02:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:48.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:48 compute-0 python3.9[161261]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:02:48 compute-0 sudo[161259]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:49 compute-0 sudo[161413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obardoovmoyncfzwwwnsmwfmcnrbmrtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162569.2252142-1287-26844153412478/AnsiballZ_file.py'
Jan 23 10:02:49 compute-0 sudo[161413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:49 compute-0 python3.9[161415]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:02:49 compute-0 sudo[161413]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:49 compute-0 sudo[161490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twspbqcjwtwdjckcpkmhpmnelyxkkirm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162569.2252142-1287-26844153412478/AnsiballZ_stat.py'
Jan 23 10:02:49 compute-0 sudo[161490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:49] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Jan 23 10:02:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:49] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Jan 23 10:02:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:02:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:02:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:02:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:02:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:02:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:02:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:02:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:02:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 85 B/s wr, 77 op/s
Jan 23 10:02:50 compute-0 python3.9[161492]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:02:50 compute-0 sudo[161490]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:50 compute-0 ceph-mon[74335]: pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 85 B/s wr, 97 op/s
Jan 23 10:02:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:02:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:50.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:02:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:02:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:02:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:50 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:02:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:02:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:50.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:02:51 compute-0 sudo[161643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwtoglgbafcfjvqduxkkdigszafagzaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162570.2442634-1287-108619672754775/AnsiballZ_copy.py'
Jan 23 10:02:51 compute-0 sudo[161643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:51 compute-0 python3.9[161645]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769162570.2442634-1287-108619672754775/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:02:51 compute-0 sudo[161643]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:52 compute-0 sudo[161722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oysjcaiszkswwjgmzwgwabxdczeziqhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162570.2442634-1287-108619672754775/AnsiballZ_systemd.py'
Jan 23 10:02:52 compute-0 sudo[161722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 938 B/s wr, 83 op/s
Jan 23 10:02:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:52 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:52.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:52.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:53 : epoch 697345ce : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:02:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 938 B/s wr, 15 op/s
Jan 23 10:02:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:54 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:54.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:02:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:54.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:02:55 compute-0 python3.9[161724]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 10:02:55 compute-0 systemd[1]: Reloading.
Jan 23 10:02:55 compute-0 systemd-rc-local-generator[161775]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:02:55 compute-0 systemd-sysv-generator[161778]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:02:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:02:55 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:02:55 compute-0 ceph-mon[74335]: pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 85 B/s wr, 77 op/s
Jan 23 10:02:55 compute-0 sudo[161722]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:56 compute-0 sudo[161857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etbupduwifukfubynurhfziulnyflufj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162570.2442634-1287-108619672754775/AnsiballZ_systemd.py'
Jan 23 10:02:56 compute-0 sudo[161857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1023 B/s wr, 15 op/s
Jan 23 10:02:56 compute-0 python3.9[161859]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:02:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:56 compute-0 systemd[1]: Reloading.
Jan 23 10:02:56 compute-0 systemd-sysv-generator[161892]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:02:56 compute-0 systemd-rc-local-generator[161888]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:02:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:56 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:02:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:56.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:02:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:56 compute-0 ceph-mon[74335]: pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 938 B/s wr, 83 op/s
Jan 23 10:02:56 compute-0 ceph-mon[74335]: pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 938 B/s wr, 15 op/s
Jan 23 10:02:56 compute-0 ceph-mon[74335]: pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1023 B/s wr, 15 op/s
Jan 23 10:02:56 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 23 10:02:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:02:56.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:02:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242a63e32ddb7bea9e5703add0577d3c52d4285ce1ab2ed72eb9293ba57f4e99/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242a63e32ddb7bea9e5703add0577d3c52d4285ce1ab2ed72eb9293ba57f4e99/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 10:02:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d.
Jan 23 10:02:57 compute-0 podman[161901]: 2026-01-23 10:02:57.35347538 +0000 UTC m=+0.410332244 container init 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: + sudo -E kolla_set_configs
Jan 23 10:02:57 compute-0 podman[161901]: 2026-01-23 10:02:57.385676489 +0000 UTC m=+0.442533323 container start 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 23 10:02:57 compute-0 edpm-start-podman-container[161901]: ovn_metadata_agent
Jan 23 10:02:57 compute-0 edpm-start-podman-container[161900]: Creating additional drop-in dependency for "ovn_metadata_agent" (7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d)
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Validating config file
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Copying service configuration files
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Writing out command to execute
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: ++ cat /run_command
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: + CMD=neutron-ovn-metadata-agent
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: + ARGS=
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: + sudo kolla_copy_cacerts
Jan 23 10:02:57 compute-0 systemd[1]: Reloading.
Jan 23 10:02:57 compute-0 podman[161923]: 2026-01-23 10:02:57.530699239 +0000 UTC m=+0.132832638 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: Running command: 'neutron-ovn-metadata-agent'
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: + [[ ! -n '' ]]
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: + . kolla_extend_start
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: + umask 0022
Jan 23 10:02:57 compute-0 ovn_metadata_agent[161916]: + exec neutron-ovn-metadata-agent
Jan 23 10:02:57 compute-0 systemd-sysv-generator[161993]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:02:57 compute-0 systemd-rc-local-generator[161988]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:02:57 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 23 10:02:57 compute-0 sudo[161857]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 938 B/s wr, 7 op/s
Jan 23 10:02:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:02:58 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:02:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:58.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:58 compute-0 python3.9[162155]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 10:02:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:02:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:02:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:58.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:02:58 compute-0 sudo[162162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:02:58 compute-0 sudo[162162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:02:58 compute-0 sudo[162162]: pam_unix(sudo:session): session closed for user root
Jan 23 10:02:59 compute-0 ceph-mon[74335]: pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 938 B/s wr, 7 op/s
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.709 161921 INFO neutron.common.config [-] Logging enabled!
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.710 161921 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.710 161921 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.710 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.710 161921 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.710 161921 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.711 161921 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.711 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.711 161921 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.711 161921 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.711 161921 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.711 161921 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.711 161921 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.711 161921 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.712 161921 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.712 161921 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.712 161921 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.712 161921 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.712 161921 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.712 161921 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.712 161921 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.713 161921 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.713 161921 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.713 161921 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.713 161921 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.713 161921 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.713 161921 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.713 161921 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.714 161921 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.714 161921 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.714 161921 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.714 161921 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.714 161921 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.714 161921 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.715 161921 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.715 161921 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.715 161921 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.715 161921 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.715 161921 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.715 161921 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.715 161921 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.716 161921 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.716 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.716 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.716 161921 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.716 161921 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.716 161921 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.716 161921 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.716 161921 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.716 161921 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.717 161921 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.717 161921 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.717 161921 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.717 161921 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.717 161921 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.717 161921 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.717 161921 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.717 161921 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.717 161921 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.717 161921 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.718 161921 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.718 161921 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.718 161921 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.718 161921 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.718 161921 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.718 161921 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.718 161921 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.718 161921 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.719 161921 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.719 161921 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.719 161921 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.719 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.719 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.719 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.719 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.719 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.719 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.720 161921 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.720 161921 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.720 161921 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.720 161921 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.720 161921 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.720 161921 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.720 161921 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.720 161921 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.720 161921 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.721 161921 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.721 161921 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.721 161921 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.721 161921 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.721 161921 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.721 161921 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.722 161921 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.722 161921 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.722 161921 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.722 161921 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.722 161921 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.722 161921 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.722 161921 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.722 161921 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.722 161921 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.722 161921 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.723 161921 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.723 161921 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.723 161921 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.723 161921 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.723 161921 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.723 161921 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.723 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.723 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.724 161921 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.724 161921 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.724 161921 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.724 161921 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.724 161921 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.724 161921 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.724 161921 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.724 161921 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.724 161921 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.725 161921 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.725 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.725 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.725 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.725 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.725 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.725 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.725 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.725 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.726 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.726 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.726 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.726 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.726 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.726 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.726 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.726 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.726 161921 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.727 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.727 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.727 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.727 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.727 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.727 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.727 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.727 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.728 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.728 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.728 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.728 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.728 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.728 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.728 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.729 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.729 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.729 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.729 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.729 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.729 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.729 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.730 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.730 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.730 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.730 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.730 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.730 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.730 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.730 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.730 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.731 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.731 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.731 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.731 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.731 161921 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.731 161921 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.731 161921 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.732 161921 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.732 161921 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.732 161921 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.732 161921 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.732 161921 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.732 161921 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.732 161921 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.732 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.733 161921 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.733 161921 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.733 161921 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.733 161921 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.733 161921 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.733 161921 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.733 161921 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.734 161921 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.734 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.734 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.734 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.734 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.734 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.734 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.735 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.735 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.735 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.735 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.735 161921 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.735 161921 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.735 161921 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.736 161921 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.736 161921 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.736 161921 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.736 161921 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.736 161921 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.736 161921 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.737 161921 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.737 161921 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.737 161921 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.737 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.737 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.737 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.737 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.737 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.737 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.738 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.738 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.738 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.738 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.738 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.738 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.738 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.738 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.739 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.739 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.739 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.739 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.739 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.739 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.739 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.739 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.740 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.740 161921 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.740 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.740 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.740 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.740 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.740 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.740 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.741 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.741 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.741 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.741 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.741 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.741 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.741 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.742 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.742 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.742 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.742 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.742 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.742 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.742 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.743 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.743 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.743 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.743 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.743 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.743 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.743 161921 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.743 161921 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.744 161921 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.744 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.744 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.744 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.744 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.744 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.744 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.745 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.745 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.745 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.745 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.745 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.745 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.745 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.745 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.746 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.746 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.746 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.746 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.746 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.746 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.746 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.746 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.747 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.747 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.747 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.747 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.747 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.747 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.747 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.748 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.748 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.748 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.748 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.748 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.748 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.748 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.748 161921 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.748 161921 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.759 161921 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.759 161921 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.759 161921 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.760 161921 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.760 161921 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.778 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 57e418b8-f514-4483-8675-f32d2dcd8cea (UUID: 57e418b8-f514-4483-8675-f32d2dcd8cea) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.801 161921 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.801 161921 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.801 161921 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.801 161921 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.808 161921 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.814 161921 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.822 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '57e418b8-f514-4483-8675-f32d2dcd8cea'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], external_ids={}, name=57e418b8-f514-4483-8675-f32d2dcd8cea, nb_cfg_timestamp=1769162503600, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.823 161921 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fbb7619af40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.824 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.824 161921 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.824 161921 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.825 161921 INFO oslo_service.service [-] Starting 1 workers
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.830 161921 DEBUG oslo_service.service [-] Started child 162303 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.834 161921 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpm91q9wva/privsep.sock']
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.834 162303 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-361436'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.858 162303 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.859 162303 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.859 162303 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.862 162303 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.868 162303 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 23 10:02:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:02:59.875 162303 INFO eventlet.wsgi.server [-] (162303) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 23 10:02:59 compute-0 sudo[162335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oabgcvfjcqkmryxeysrqqmxxzzmoxoak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162579.636389-1422-123876169872618/AnsiballZ_stat.py'
Jan 23 10:02:59 compute-0 sudo[162335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:02:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:59] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Jan 23 10:02:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:02:59] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Jan 23 10:03:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:00 compute-0 python3.9[162337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:03:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 938 B/s wr, 7 op/s
Jan 23 10:03:00 compute-0 sudo[162335]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:00 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 23 10:03:00 compute-0 sudo[162463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agvcrgwutiolaffyruslyllvfhccammq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162579.636389-1422-123876169872618/AnsiballZ_copy.py'
Jan 23 10:03:00 compute-0 sudo[162463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:00 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:00 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:00.669 161921 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 23 10:03:00 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:00.670 161921 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpm91q9wva/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 23 10:03:00 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:00.478 162436 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 10:03:00 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:00.485 162436 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 10:03:00 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:00.487 162436 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 23 10:03:00 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:00.487 162436 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162436
Jan 23 10:03:00 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:00.673 162436 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4f0627-7e9a-4cc9-8ca2-eb531fa8052a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:03:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:00.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:00 compute-0 python3.9[162465]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162579.636389-1422-123876169872618/.source.yaml _original_basename=.k9yo_3ww follow=False checksum=d88282ad6bcd11f7bd2cbc3f4703eb6122d6b05d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:00 compute-0 sudo[162463]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:00.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:01 compute-0 sshd-session[152708]: Connection closed by 192.168.122.30 port 55798
Jan 23 10:03:01 compute-0 sshd-session[152705]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:03:01 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Jan 23 10:03:01 compute-0 systemd[1]: session-52.scope: Consumed 1min 648ms CPU time.
Jan 23 10:03:01 compute-0 systemd-logind[784]: Session 52 logged out. Waiting for processes to exit.
Jan 23 10:03:01 compute-0 systemd-logind[784]: Removed session 52.
Jan 23 10:03:01 compute-0 ceph-mon[74335]: pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 938 B/s wr, 7 op/s
Jan 23 10:03:01 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:01.315 162436 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:03:01 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:01.315 162436 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:03:01 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:01.316 162436 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:03:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100301 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:03:01 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:01.980 162436 DEBUG oslo.privsep.daemon [-] privsep: reply[3a105aba-39f2-4cf3-9de4-49f5af923939]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:03:01 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:01.983 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, column=external_ids, values=({'neutron:ovn-metadata-id': 'a0eac155-3a00-57a9-bf62-261a194fbe1e'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.010 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.019 161921 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.019 161921 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.020 161921 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.020 161921 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.020 161921 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.020 161921 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.020 161921 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.020 161921 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.020 161921 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.021 161921 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.021 161921 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.021 161921 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.021 161921 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.021 161921 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.021 161921 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.021 161921 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.021 161921 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.022 161921 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.022 161921 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.022 161921 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.022 161921 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.022 161921 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.022 161921 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.022 161921 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.022 161921 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.023 161921 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.023 161921 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.023 161921 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.023 161921 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.023 161921 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.023 161921 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.023 161921 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.024 161921 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.024 161921 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.024 161921 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.024 161921 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.024 161921 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.024 161921 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.024 161921 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.025 161921 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.025 161921 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.025 161921 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.025 161921 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.025 161921 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.025 161921 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.025 161921 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.025 161921 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.026 161921 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.026 161921 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.026 161921 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.026 161921 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.026 161921 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.026 161921 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.026 161921 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.026 161921 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.027 161921 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.027 161921 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.027 161921 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.027 161921 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.027 161921 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.027 161921 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.027 161921 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.027 161921 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.028 161921 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.028 161921 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.028 161921 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.028 161921 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.028 161921 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.028 161921 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.028 161921 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.028 161921 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.029 161921 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.029 161921 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.029 161921 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.029 161921 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.029 161921 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.029 161921 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.029 161921 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.029 161921 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.030 161921 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.030 161921 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.030 161921 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.030 161921 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.030 161921 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.030 161921 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.030 161921 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.030 161921 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.030 161921 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.031 161921 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.031 161921 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.031 161921 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.031 161921 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.031 161921 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.031 161921 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.031 161921 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.031 161921 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.032 161921 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.032 161921 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.032 161921 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.032 161921 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.032 161921 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.032 161921 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.032 161921 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.032 161921 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.033 161921 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.033 161921 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.033 161921 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.033 161921 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.033 161921 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.033 161921 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.033 161921 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.034 161921 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.034 161921 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.034 161921 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.034 161921 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.034 161921 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.034 161921 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.034 161921 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.035 161921 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.035 161921 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.035 161921 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.035 161921 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.035 161921 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.035 161921 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.036 161921 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.036 161921 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.036 161921 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.036 161921 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.036 161921 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.036 161921 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.036 161921 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.036 161921 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.037 161921 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.037 161921 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.037 161921 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.037 161921 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.037 161921 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.037 161921 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.037 161921 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.038 161921 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.038 161921 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.038 161921 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.038 161921 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.038 161921 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.038 161921 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.038 161921 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.039 161921 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.039 161921 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.039 161921 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.039 161921 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.039 161921 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.039 161921 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.039 161921 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.040 161921 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.040 161921 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.040 161921 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.040 161921 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.040 161921 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.040 161921 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.040 161921 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.040 161921 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.040 161921 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.041 161921 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.041 161921 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.041 161921 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.041 161921 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.041 161921 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.041 161921 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.041 161921 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.042 161921 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.042 161921 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.042 161921 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.042 161921 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.042 161921 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.042 161921 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.042 161921 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.042 161921 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.042 161921 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.043 161921 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.043 161921 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.043 161921 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.043 161921 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.043 161921 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.043 161921 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.044 161921 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.044 161921 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.044 161921 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.044 161921 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.044 161921 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.044 161921 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.045 161921 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.045 161921 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.045 161921 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.045 161921 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.045 161921 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.045 161921 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.046 161921 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.046 161921 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.046 161921 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.046 161921 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.046 161921 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.046 161921 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.046 161921 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.046 161921 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.047 161921 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.047 161921 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.047 161921 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.047 161921 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.047 161921 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.047 161921 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.047 161921 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.047 161921 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.048 161921 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.048 161921 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.048 161921 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.048 161921 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.048 161921 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.048 161921 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.048 161921 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.049 161921 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.049 161921 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.049 161921 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.049 161921 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.049 161921 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.049 161921 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.049 161921 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.049 161921 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.050 161921 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.050 161921 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.050 161921 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.050 161921 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.050 161921 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.050 161921 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.050 161921 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.050 161921 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.051 161921 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.051 161921 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.051 161921 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.051 161921 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.051 161921 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.051 161921 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.051 161921 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.052 161921 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.052 161921 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.052 161921 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.052 161921 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.052 161921 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.052 161921 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.052 161921 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.053 161921 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.053 161921 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.053 161921 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.053 161921 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.053 161921 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.053 161921 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.053 161921 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.054 161921 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.054 161921 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.054 161921 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.054 161921 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.054 161921 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.054 161921 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.054 161921 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.055 161921 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.055 161921 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.055 161921 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.055 161921 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.055 161921 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.055 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.055 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.056 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.056 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.056 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.056 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.056 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.056 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.057 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.057 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.057 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.057 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.057 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.057 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.057 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.058 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.058 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.058 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.058 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.058 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.058 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.058 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.059 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.059 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.059 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.059 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.059 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.059 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.059 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.060 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.060 161921 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.060 161921 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.060 161921 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.060 161921 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.060 161921 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:03:02 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:02.061 161921 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 10:03:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 938 B/s wr, 7 op/s
Jan 23 10:03:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:02 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:02 compute-0 ceph-mon[74335]: pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 938 B/s wr, 7 op/s
Jan 23 10:03:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:02.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:02.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:03:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:04 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:04.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:04.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:03:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:03:05 compute-0 ceph-mon[74335]: pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:03:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:03:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:03:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:06 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:06.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:06.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:03:06.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:03:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:03:06.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:03:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:03:06.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:03:07 compute-0 ceph-mon[74335]: pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:03:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:03:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c0026b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:08 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:08.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:08.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:09 compute-0 ceph-mon[74335]: pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:03:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:09] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Jan 23 10:03:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:09] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Jan 23 10:03:10 compute-0 sshd-session[162503]: Accepted publickey for zuul from 192.168.122.30 port 55622 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 10:03:10 compute-0 systemd-logind[784]: New session 53 of user zuul.
Jan 23 10:03:10 compute-0 systemd[1]: Started Session 53 of User zuul.
Jan 23 10:03:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:10 compute-0 sshd-session[162503]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:03:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:03:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:10 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:10.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:10.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:11 compute-0 python3.9[162657]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:03:11 compute-0 ceph-mon[74335]: pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:03:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:03:12 compute-0 sudo[162813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgijsdbdwrnjuovkakplpzehvamikjeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162591.797815-57-8736050300758/AnsiballZ_command.py'
Jan 23 10:03:12 compute-0 sudo[162813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:12 compute-0 python3.9[162815]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:03:12 compute-0 sudo[162813]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:12 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:12.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:12.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:13 compute-0 sudo[162930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:03:13 compute-0 sudo[162930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:13 compute-0 sudo[162930]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:13 compute-0 podman[162928]: 2026-01-23 10:03:13.566913755 +0000 UTC m=+0.091793186 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:03:13 compute-0 sudo[163053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvzczdtzyaysxfeqxcfogjtsriqhkfoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162592.9634192-90-162575043038152/AnsiballZ_systemd_service.py'
Jan 23 10:03:13 compute-0 sudo[163001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:03:13 compute-0 sudo[163053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:13 compute-0 sudo[163001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:14 compute-0 python3.9[163056]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 10:03:14 compute-0 systemd[1]: Reloading.
Jan 23 10:03:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:14 compute-0 systemd-sysv-generator[163109]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:03:14 compute-0 systemd-rc-local-generator[163104]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:03:14 compute-0 sudo[163001]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:03:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:03:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:03:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:03:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:03:14 compute-0 sudo[163053]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:14 compute-0 ceph-mon[74335]: pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:03:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:03:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:03:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:03:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:03:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:03:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:03:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:03:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:03:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:03:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:14 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:14 compute-0 sudo[163149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:03:14 compute-0 sudo[163149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:14 compute-0 sudo[163149]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:14 compute-0 sudo[163174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:03:14 compute-0 sudo[163174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:14.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:14.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:15 compute-0 podman[163291]: 2026-01-23 10:03:15.126239871 +0000 UTC m=+0.046713362 container create ac03ae438787fcde49129f17ff2dba3937881267f9d751dcdc621ce525383272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_pascal, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:03:15 compute-0 systemd[1]: Started libpod-conmon-ac03ae438787fcde49129f17ff2dba3937881267f9d751dcdc621ce525383272.scope.
Jan 23 10:03:15 compute-0 podman[163291]: 2026-01-23 10:03:15.102791843 +0000 UTC m=+0.023265354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:03:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:03:15 compute-0 podman[163291]: 2026-01-23 10:03:15.221731876 +0000 UTC m=+0.142205387 container init ac03ae438787fcde49129f17ff2dba3937881267f9d751dcdc621ce525383272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_pascal, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Jan 23 10:03:15 compute-0 podman[163291]: 2026-01-23 10:03:15.230845797 +0000 UTC m=+0.151319288 container start ac03ae438787fcde49129f17ff2dba3937881267f9d751dcdc621ce525383272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_pascal, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:03:15 compute-0 confident_pascal[163347]: 167 167
Jan 23 10:03:15 compute-0 systemd[1]: libpod-ac03ae438787fcde49129f17ff2dba3937881267f9d751dcdc621ce525383272.scope: Deactivated successfully.
Jan 23 10:03:15 compute-0 podman[163291]: 2026-01-23 10:03:15.249030729 +0000 UTC m=+0.169504250 container attach ac03ae438787fcde49129f17ff2dba3937881267f9d751dcdc621ce525383272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 10:03:15 compute-0 podman[163291]: 2026-01-23 10:03:15.249575085 +0000 UTC m=+0.170048576 container died ac03ae438787fcde49129f17ff2dba3937881267f9d751dcdc621ce525383272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:03:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-31eafd1a08d1a2d281d60b52d3928651ecf91c2f6a17809fa898fc7402da856c-merged.mount: Deactivated successfully.
Jan 23 10:03:15 compute-0 podman[163291]: 2026-01-23 10:03:15.297380019 +0000 UTC m=+0.217853510 container remove ac03ae438787fcde49129f17ff2dba3937881267f9d751dcdc621ce525383272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:03:15 compute-0 systemd[1]: libpod-conmon-ac03ae438787fcde49129f17ff2dba3937881267f9d751dcdc621ce525383272.scope: Deactivated successfully.
Jan 23 10:03:15 compute-0 python3.9[163387]: ansible-ansible.builtin.service_facts Invoked
Jan 23 10:03:15 compute-0 podman[163405]: 2026-01-23 10:03:15.483338118 +0000 UTC m=+0.059555625 container create d2acc8e543f102403e689caf52988e18800cc8345527305771493e9f62af8a53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:03:15 compute-0 systemd[1]: Started libpod-conmon-d2acc8e543f102403e689caf52988e18800cc8345527305771493e9f62af8a53.scope.
Jan 23 10:03:15 compute-0 network[163438]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 10:03:15 compute-0 network[163441]: 'network-scripts' will be removed from distribution in near future.
Jan 23 10:03:15 compute-0 network[163442]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 10:03:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:03:15 compute-0 podman[163405]: 2026-01-23 10:03:15.451570472 +0000 UTC m=+0.027787999 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c0c36be28a98277f0a8bb0a8bf80b4e1d8d061a2948ca0ceb8411a18d75b20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c0c36be28a98277f0a8bb0a8bf80b4e1d8d061a2948ca0ceb8411a18d75b20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c0c36be28a98277f0a8bb0a8bf80b4e1d8d061a2948ca0ceb8411a18d75b20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c0c36be28a98277f0a8bb0a8bf80b4e1d8d061a2948ca0ceb8411a18d75b20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c0c36be28a98277f0a8bb0a8bf80b4e1d8d061a2948ca0ceb8411a18d75b20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:15 compute-0 podman[163405]: 2026-01-23 10:03:15.568138904 +0000 UTC m=+0.144356431 container init d2acc8e543f102403e689caf52988e18800cc8345527305771493e9f62af8a53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 10:03:15 compute-0 podman[163405]: 2026-01-23 10:03:15.57574373 +0000 UTC m=+0.151961237 container start d2acc8e543f102403e689caf52988e18800cc8345527305771493e9f62af8a53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:03:15 compute-0 podman[163405]: 2026-01-23 10:03:15.623502863 +0000 UTC m=+0.199720370 container attach d2acc8e543f102403e689caf52988e18800cc8345527305771493e9f62af8a53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:03:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:15 compute-0 intelligent_hellman[163439]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:03:15 compute-0 intelligent_hellman[163439]: --> All data devices are unavailable
Jan 23 10:03:15 compute-0 podman[163405]: 2026-01-23 10:03:15.943066332 +0000 UTC m=+0.519283839 container died d2acc8e543f102403e689caf52988e18800cc8345527305771493e9f62af8a53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 10:03:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:03:16 compute-0 systemd[1]: libpod-d2acc8e543f102403e689caf52988e18800cc8345527305771493e9f62af8a53.scope: Deactivated successfully.
Jan 23 10:03:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:16 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:16.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:03:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:16.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:03:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:03:16.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:03:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:03:16.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:03:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:03:16.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:03:17 compute-0 ceph-mon[74335]: pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:03:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:03:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:03:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:03:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:03:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:03:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:03:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6c0c36be28a98277f0a8bb0a8bf80b4e1d8d061a2948ca0ceb8411a18d75b20-merged.mount: Deactivated successfully.
Jan 23 10:03:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:18 compute-0 podman[163405]: 2026-01-23 10:03:18.1339168 +0000 UTC m=+2.710134307 container remove d2acc8e543f102403e689caf52988e18800cc8345527305771493e9f62af8a53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hellman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:03:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:18 compute-0 systemd[1]: libpod-conmon-d2acc8e543f102403e689caf52988e18800cc8345527305771493e9f62af8a53.scope: Deactivated successfully.
Jan 23 10:03:18 compute-0 sudo[163174]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:18 compute-0 sudo[163583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:03:18 compute-0 sudo[163583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:18 compute-0 sudo[163583]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:18 compute-0 sudo[163632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:03:18 compute-0 sudo[163632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:18 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:03:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:18.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:03:18 compute-0 podman[163696]: 2026-01-23 10:03:18.812810152 +0000 UTC m=+0.109361169 container create a24691646170088288cd771a9b4f1177fa6f0c7c0c91733ff4641d29eacbaf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Jan 23 10:03:18 compute-0 podman[163696]: 2026-01-23 10:03:18.727433189 +0000 UTC m=+0.023984236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:03:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:18.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:18 compute-0 ceph-mon[74335]: pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:03:18 compute-0 systemd[1]: Started libpod-conmon-a24691646170088288cd771a9b4f1177fa6f0c7c0c91733ff4641d29eacbaf19.scope.
Jan 23 10:03:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:03:19 compute-0 sudo[163714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:03:19 compute-0 sudo[163714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:19 compute-0 podman[163696]: 2026-01-23 10:03:19.045491683 +0000 UTC m=+0.342042720 container init a24691646170088288cd771a9b4f1177fa6f0c7c0c91733ff4641d29eacbaf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 23 10:03:19 compute-0 sudo[163714]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:19 compute-0 podman[163696]: 2026-01-23 10:03:19.052663066 +0000 UTC m=+0.349214083 container start a24691646170088288cd771a9b4f1177fa6f0c7c0c91733ff4641d29eacbaf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 10:03:19 compute-0 podman[163696]: 2026-01-23 10:03:19.057302025 +0000 UTC m=+0.353853072 container attach a24691646170088288cd771a9b4f1177fa6f0c7c0c91733ff4641d29eacbaf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 23 10:03:19 compute-0 systemd[1]: libpod-a24691646170088288cd771a9b4f1177fa6f0c7c0c91733ff4641d29eacbaf19.scope: Deactivated successfully.
Jan 23 10:03:19 compute-0 gracious_kare[163712]: 167 167
Jan 23 10:03:19 compute-0 conmon[163712]: conmon a24691646170088288cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a24691646170088288cd771a9b4f1177fa6f0c7c0c91733ff4641d29eacbaf19.scope/container/memory.events
Jan 23 10:03:19 compute-0 podman[163696]: 2026-01-23 10:03:19.060094248 +0000 UTC m=+0.356645265 container died a24691646170088288cd771a9b4f1177fa6f0c7c0c91733ff4641d29eacbaf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 10:03:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf3fee0adb545fd0bf0d7f9d5843c4c5c0eb1bfc058ddfce542217d328f1d1c0-merged.mount: Deactivated successfully.
Jan 23 10:03:19 compute-0 podman[163696]: 2026-01-23 10:03:19.266083553 +0000 UTC m=+0.562634570 container remove a24691646170088288cd771a9b4f1177fa6f0c7c0c91733ff4641d29eacbaf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:03:19 compute-0 systemd[1]: libpod-conmon-a24691646170088288cd771a9b4f1177fa6f0c7c0c91733ff4641d29eacbaf19.scope: Deactivated successfully.
Jan 23 10:03:19 compute-0 podman[163761]: 2026-01-23 10:03:19.443408735 +0000 UTC m=+0.052777673 container create 906573fe6e74f210eb5c18d8110159031c69f84ace936ba433b5718cfa11269c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:03:19 compute-0 systemd[1]: Started libpod-conmon-906573fe6e74f210eb5c18d8110159031c69f84ace936ba433b5718cfa11269c.scope.
Jan 23 10:03:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:03:19 compute-0 podman[163761]: 2026-01-23 10:03:19.420840543 +0000 UTC m=+0.030209501 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbc74e0e090cd687297fbbad81cb17349b40a2f5b4bc7d486262b731a939b69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbc74e0e090cd687297fbbad81cb17349b40a2f5b4bc7d486262b731a939b69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbc74e0e090cd687297fbbad81cb17349b40a2f5b4bc7d486262b731a939b69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbc74e0e090cd687297fbbad81cb17349b40a2f5b4bc7d486262b731a939b69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:19 compute-0 podman[163761]: 2026-01-23 10:03:19.749623157 +0000 UTC m=+0.358992115 container init 906573fe6e74f210eb5c18d8110159031c69f84ace936ba433b5718cfa11269c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 10:03:19 compute-0 podman[163761]: 2026-01-23 10:03:19.75879202 +0000 UTC m=+0.368160958 container start 906573fe6e74f210eb5c18d8110159031c69f84ace936ba433b5718cfa11269c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:03:19 compute-0 podman[163761]: 2026-01-23 10:03:19.773877999 +0000 UTC m=+0.383246937 container attach 906573fe6e74f210eb5c18d8110159031c69f84ace936ba433b5718cfa11269c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:03:19 compute-0 ceph-mon[74335]: pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:03:19
Jan 23 10:03:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:03:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:03:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.nfs', '.rgw.root', 'default.rgw.control', '.mgr', 'vms', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'images']
Jan 23 10:03:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:03:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:19] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Jan 23 10:03:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:19] "GET /metrics HTTP/1.1" 200 48434 "" "Prometheus/2.51.0"
Jan 23 10:03:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:03:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]: {
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:     "1": [
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:         {
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             "devices": [
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "/dev/loop3"
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             ],
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             "lv_name": "ceph_lv0",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             "lv_size": "21470642176",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             "name": "ceph_lv0",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             "tags": {
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.cluster_name": "ceph",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.crush_device_class": "",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.encrypted": "0",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.osd_id": "1",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.type": "block",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.vdo": "0",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:                 "ceph.with_tpm": "0"
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             },
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             "type": "block",
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:             "vg_name": "ceph_vg0"
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:         }
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]:     ]
Jan 23 10:03:20 compute-0 intelligent_wilson[163777]: }
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:03:20 compute-0 systemd[1]: libpod-906573fe6e74f210eb5c18d8110159031c69f84ace936ba433b5718cfa11269c.scope: Deactivated successfully.
Jan 23 10:03:20 compute-0 podman[163761]: 2026-01-23 10:03:20.088957084 +0000 UTC m=+0.698326042 container died 906573fe6e74f210eb5c18d8110159031c69f84ace936ba433b5718cfa11269c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:03:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:03:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbbc74e0e090cd687297fbbad81cb17349b40a2f5b4bc7d486262b731a939b69-merged.mount: Deactivated successfully.
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:03:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:03:20 compute-0 podman[163761]: 2026-01-23 10:03:20.220473752 +0000 UTC m=+0.829842690 container remove 906573fe6e74f210eb5c18d8110159031c69f84ace936ba433b5718cfa11269c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:03:20 compute-0 systemd[1]: libpod-conmon-906573fe6e74f210eb5c18d8110159031c69f84ace936ba433b5718cfa11269c.scope: Deactivated successfully.
Jan 23 10:03:20 compute-0 sudo[163632]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:20 compute-0 sudo[163802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:03:20 compute-0 sudo[163802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:20 compute-0 sudo[163802]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:20 compute-0 sudo[163827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:03:20 compute-0 sudo[163827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e80004c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:20 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:20.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:20.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:20 compute-0 podman[163932]: 2026-01-23 10:03:20.82570226 +0000 UTC m=+0.028221072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:03:21 compute-0 podman[163932]: 2026-01-23 10:03:21.009471454 +0000 UTC m=+0.211990226 container create acb0b352b7db8c28b6d4024efc357895f402cd0928c4b191c9292d7909f1efb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Jan 23 10:03:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:03:21 compute-0 ceph-mon[74335]: pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:21 compute-0 systemd[1]: Started libpod-conmon-acb0b352b7db8c28b6d4024efc357895f402cd0928c4b191c9292d7909f1efb6.scope.
Jan 23 10:03:21 compute-0 sudo[164030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wefzcqrukpeoimifebxfkumcqcbqlopv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162600.7389383-147-39343329190219/AnsiballZ_systemd_service.py'
Jan 23 10:03:21 compute-0 sudo[164030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:03:21 compute-0 podman[163932]: 2026-01-23 10:03:21.329393762 +0000 UTC m=+0.531912564 container init acb0b352b7db8c28b6d4024efc357895f402cd0928c4b191c9292d7909f1efb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:03:21 compute-0 podman[163932]: 2026-01-23 10:03:21.336256907 +0000 UTC m=+0.538775679 container start acb0b352b7db8c28b6d4024efc357895f402cd0928c4b191c9292d7909f1efb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:03:21 compute-0 charming_darwin[164034]: 167 167
Jan 23 10:03:21 compute-0 systemd[1]: libpod-acb0b352b7db8c28b6d4024efc357895f402cd0928c4b191c9292d7909f1efb6.scope: Deactivated successfully.
Jan 23 10:03:21 compute-0 podman[163932]: 2026-01-23 10:03:21.357325194 +0000 UTC m=+0.559843966 container attach acb0b352b7db8c28b6d4024efc357895f402cd0928c4b191c9292d7909f1efb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_darwin, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:03:21 compute-0 podman[163932]: 2026-01-23 10:03:21.357749527 +0000 UTC m=+0.560268299 container died acb0b352b7db8c28b6d4024efc357895f402cd0928c4b191c9292d7909f1efb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:03:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fbe162584e39f8084c1719c17fb1ae27afa67ee6fd86b9dd65e3e0ef964594d-merged.mount: Deactivated successfully.
Jan 23 10:03:21 compute-0 python3.9[164035]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:03:21 compute-0 sudo[164030]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:21 compute-0 podman[163932]: 2026-01-23 10:03:21.550627492 +0000 UTC m=+0.753146264 container remove acb0b352b7db8c28b6d4024efc357895f402cd0928c4b191c9292d7909f1efb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 10:03:21 compute-0 systemd[1]: libpod-conmon-acb0b352b7db8c28b6d4024efc357895f402cd0928c4b191c9292d7909f1efb6.scope: Deactivated successfully.
Jan 23 10:03:21 compute-0 podman[164143]: 2026-01-23 10:03:21.723422449 +0000 UTC m=+0.027559422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:03:21 compute-0 sudo[164224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icqyhkvxmlyisposxymsqfboadxafpnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162601.5738103-147-224631671380178/AnsiballZ_systemd_service.py'
Jan 23 10:03:21 compute-0 sudo[164224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:21 compute-0 podman[164143]: 2026-01-23 10:03:21.934686432 +0000 UTC m=+0.238823375 container create ff8ddc53c274c4bf8341c3dd9cd5b35df45567b0c069fec8d13f68c9ae1d4a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_engelbart, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 23 10:03:21 compute-0 systemd[1]: Started libpod-conmon-ff8ddc53c274c4bf8341c3dd9cd5b35df45567b0c069fec8d13f68c9ae1d4a44.scope.
Jan 23 10:03:22 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:03:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3de86a553c39d8197c52219b28be4a209c30441fa53e8c4b0ee6dc0e0ed70d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3de86a553c39d8197c52219b28be4a209c30441fa53e8c4b0ee6dc0e0ed70d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3de86a553c39d8197c52219b28be4a209c30441fa53e8c4b0ee6dc0e0ed70d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3de86a553c39d8197c52219b28be4a209c30441fa53e8c4b0ee6dc0e0ed70d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e7c002730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:03:22 compute-0 python3.9[164226]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:03:22 compute-0 sudo[164224]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:22 compute-0 sudo[164385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hblfhwboccfiefkahbegzfqljdwkfetd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162602.331317-147-98063217882865/AnsiballZ_systemd_service.py'
Jan 23 10:03:22 compute-0 sudo[164385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:22 compute-0 podman[164143]: 2026-01-23 10:03:22.598018341 +0000 UTC m=+0.902155304 container init ff8ddc53c274c4bf8341c3dd9cd5b35df45567b0c069fec8d13f68c9ae1d4a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_engelbart, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:03:22 compute-0 podman[164143]: 2026-01-23 10:03:22.606997448 +0000 UTC m=+0.911134391 container start ff8ddc53c274c4bf8341c3dd9cd5b35df45567b0c069fec8d13f68c9ae1d4a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_engelbart, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 10:03:22 compute-0 podman[164143]: 2026-01-23 10:03:22.624008685 +0000 UTC m=+0.928145658 container attach ff8ddc53c274c4bf8341c3dd9cd5b35df45567b0c069fec8d13f68c9ae1d4a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_engelbart, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:03:22 compute-0 ceph-mon[74335]: pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:03:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:22 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40094f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000059s ======
Jan 23 10:03:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:22.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000059s
Jan 23 10:03:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:22.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:22 compute-0 python3.9[164387]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:03:22 compute-0 sudo[164385]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:23 compute-0 lvm[164559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:03:23 compute-0 lvm[164559]: VG ceph_vg0 finished
Jan 23 10:03:23 compute-0 infallible_engelbart[164229]: {}
Jan 23 10:03:23 compute-0 systemd[1]: libpod-ff8ddc53c274c4bf8341c3dd9cd5b35df45567b0c069fec8d13f68c9ae1d4a44.scope: Deactivated successfully.
Jan 23 10:03:23 compute-0 podman[164143]: 2026-01-23 10:03:23.383715524 +0000 UTC m=+1.687852477 container died ff8ddc53c274c4bf8341c3dd9cd5b35df45567b0c069fec8d13f68c9ae1d4a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:03:23 compute-0 systemd[1]: libpod-ff8ddc53c274c4bf8341c3dd9cd5b35df45567b0c069fec8d13f68c9ae1d4a44.scope: Consumed 1.255s CPU time.
Jan 23 10:03:23 compute-0 sudo[164617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dueluzesgcpydblbnfgoqgadvxmkruhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162603.1230025-147-207787115939327/AnsiballZ_systemd_service.py'
Jan 23 10:03:23 compute-0 sudo[164617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:24 compute-0 python3.9[164626]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:03:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:24 compute-0 sudo[164617]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:24 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:24.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:24 compute-0 sudo[164779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlgfzyvbelvbmmmbijfjdrzswucaatwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162604.5636373-147-268602413716796/AnsiballZ_systemd_service.py'
Jan 23 10:03:24 compute-0 sudo[164779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:24.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:25 compute-0 python3.9[164781]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:03:25 compute-0 sudo[164779]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:25 compute-0 ceph-mon[74335]: pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3de86a553c39d8197c52219b28be4a209c30441fa53e8c4b0ee6dc0e0ed70d9-merged.mount: Deactivated successfully.
Jan 23 10:03:25 compute-0 podman[164143]: 2026-01-23 10:03:25.408714982 +0000 UTC m=+3.712851955 container remove ff8ddc53c274c4bf8341c3dd9cd5b35df45567b0c069fec8d13f68c9ae1d4a44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 23 10:03:25 compute-0 systemd[1]: libpod-conmon-ff8ddc53c274c4bf8341c3dd9cd5b35df45567b0c069fec8d13f68c9ae1d4a44.scope: Deactivated successfully.
Jan 23 10:03:25 compute-0 sudo[163827]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:03:25 compute-0 sudo[164934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mphnnitpypavinagypvbsutukrdphmex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162605.4667752-147-215085070522656/AnsiballZ_systemd_service.py'
Jan 23 10:03:25 compute-0 sudo[164934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:03:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:03:26 compute-0 python3.9[164936]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:03:26 compute-0 sudo[164934]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40094f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:03:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40094f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:26 compute-0 sudo[165088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbhvjkigzbazhikagctvznsaogurkjri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162606.2300978-147-46708685494632/AnsiballZ_systemd_service.py'
Jan 23 10:03:26 compute-0 sudo[165088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:03:26 compute-0 sudo[165091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:03:26 compute-0 sudo[165091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:26 compute-0 sudo[165091]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:26 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:26.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:26 compute-0 python3.9[165090]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:03:26 compute-0 sudo[165088]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:26.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:03:26.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:03:27 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:03:27 compute-0 ceph-mon[74335]: pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:03:27 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:03:27 compute-0 sudo[165266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybfqbyhbzyhiserzthzmiysuktchhalt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162607.1833987-303-181190033202875/AnsiballZ_file.py'
Jan 23 10:03:27 compute-0 sudo[165266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:27 compute-0 python3.9[165268]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:27 compute-0 sudo[165266]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:28 compute-0 sudo[165430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmnpgrfnkvdtjyqijoobddvknmgswmlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162607.97211-303-272185272716883/AnsiballZ_file.py'
Jan 23 10:03:28 compute-0 sudo[165430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:28 compute-0 podman[165394]: 2026-01-23 10:03:28.277180964 +0000 UTC m=+0.083301681 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 10:03:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:28 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40094f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:03:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:28.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:03:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:28.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:28 compute-0 python3.9[165438]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:28 compute-0 sudo[165430]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:29 compute-0 sudo[165592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwteqefuhiwefclqbmbfvigadefoosaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162609.039828-303-276407934185379/AnsiballZ_file.py'
Jan 23 10:03:29 compute-0 sudo[165592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:29 compute-0 python3.9[165594]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:29 compute-0 ceph-mon[74335]: pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:29 compute-0 sudo[165592]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:29] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:03:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:29] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:03:30 compute-0 sudo[165745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfrjbsgmlequlnomuqntzlimrmhpicdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162609.724773-303-139332005037139/AnsiballZ_file.py'
Jan 23 10:03:30 compute-0 sudo[165745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78005050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:30 compute-0 python3.9[165747]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:30 compute-0 sudo[165745]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:30 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:30 compute-0 sudo[165898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjeugqjkmprddvstyfypbjrcryxecvxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162610.369133-303-58021026762549/AnsiballZ_file.py'
Jan 23 10:03:30 compute-0 sudo[165898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:30 compute-0 ceph-mon[74335]: pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:30.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:30 compute-0 python3.9[165900]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:30.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:30 compute-0 sudo[165898]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:31 compute-0 sudo[166050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nljdyfylciorxxltpgwibzjhaqzqmzcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162610.9945695-303-68934794292128/AnsiballZ_file.py'
Jan 23 10:03:31 compute-0 sudo[166050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:31 compute-0 python3.9[166052]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:31 compute-0 sudo[166050]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:31 compute-0 sudo[166203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wawoateievxfucfbztgkwzkfabjelvzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162611.6497333-303-24306213875547/AnsiballZ_file.py'
Jan 23 10:03:31 compute-0 sudo[166203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:32 compute-0 python3.9[166205]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40094f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:03:32 compute-0 sudo[166203]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78005050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:32 compute-0 sudo[166356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufdbuujcxnhrnxrbdhgoirursknulbcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162612.3883917-453-77023705115619/AnsiballZ_file.py'
Jan 23 10:03:32 compute-0 sudo[166356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:32 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:32.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:32 compute-0 python3.9[166358]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:32 compute-0 sudo[166356]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:32.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:33 compute-0 ceph-mon[74335]: pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:03:33 compute-0 sudo[166508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxzkplyhwqeenvxfgwwvgqrkqctcrztp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162613.0031223-453-182088870260848/AnsiballZ_file.py'
Jan 23 10:03:33 compute-0 sudo[166508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:33 compute-0 python3.9[166510]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:33 compute-0 sudo[166508]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:33 compute-0 sudo[166661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrwmwxgerwmglngzxxnejctxsqnkxfqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162613.6386218-453-129344999528490/AnsiballZ_file.py'
Jan 23 10:03:33 compute-0 sudo[166661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:34 compute-0 python3.9[166663]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:34 compute-0 sudo[166661]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40094f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:34 compute-0 sudo[166814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zityhlqesrfttinwlgaggvfglqekwvkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162614.2769372-453-29325137089219/AnsiballZ_file.py'
Jan 23 10:03:34 compute-0 sudo[166814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:34 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78005050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:34.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:34 compute-0 python3.9[166816]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:34 compute-0 sudo[166814]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:34.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:03:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:03:35 compute-0 ceph-mon[74335]: pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:03:35 compute-0 sudo[166966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agftngiwimapgczbznchnaleihdqjfkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162615.0193005-453-236848059424722/AnsiballZ_file.py'
Jan 23 10:03:35 compute-0 sudo[166966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:35 compute-0 python3.9[166968]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:35 compute-0 sudo[166966]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:35 compute-0 sudo[167119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgntssxrsrldganhtptsytklrlmlxhpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162615.6911922-453-157730101373226/AnsiballZ_file.py'
Jan 23 10:03:35 compute-0 sudo[167119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:03:36 compute-0 python3.9[167121]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:36 compute-0 sudo[167119]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:36 compute-0 sudo[167272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmtxpectkqxuglvhfpslzcbpzbwwyyng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162616.3717995-453-48376411150763/AnsiballZ_file.py'
Jan 23 10:03:36 compute-0 sudo[167272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:36 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40094f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:36.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:36 compute-0 python3.9[167274]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:03:36 compute-0 sudo[167272]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:36.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:03:36.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:03:37 compute-0 ceph-mon[74335]: pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:03:37 compute-0 sudo[167424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmeseadnfvfobaqrnheryryneulixbxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162617.1982217-606-153437392354531/AnsiballZ_command.py'
Jan 23 10:03:37 compute-0 sudo[167424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:37 compute-0 python3.9[167426]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:03:37 compute-0 sudo[167424]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78005050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:38 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:38 compute-0 ceph-mon[74335]: pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:38.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:38 compute-0 python3.9[167580]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 10:03:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:38.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:39 compute-0 sudo[167605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:03:39 compute-0 sudo[167605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:39 compute-0 sudo[167605]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:39 compute-0 sudo[167755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdkiqdiloknuieahpsbnuiexytitnpqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162619.16159-660-255218018060288/AnsiballZ_systemd_service.py'
Jan 23 10:03:39 compute-0 sudo[167755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:39 compute-0 python3.9[167757]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 10:03:39 compute-0 systemd[1]: Reloading.
Jan 23 10:03:39 compute-0 systemd-rc-local-generator[167784]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:03:39 compute-0 systemd-sysv-generator[167788]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:03:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:39] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:03:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:39] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:03:40 compute-0 sudo[167755]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40094f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78005050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:40 compute-0 sudo[167944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upyfvltlvwukqznefotraqugkrylbubu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162620.3188887-684-77552775721377/AnsiballZ_command.py'
Jan 23 10:03:40 compute-0 sudo[167944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:40 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:40.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:40 compute-0 python3.9[167946]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:03:40 compute-0 sudo[167944]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:40.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:41 compute-0 sudo[168097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zezndpdhtvdvijkbgdbszjamfztyawka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162621.1161005-684-220770979237115/AnsiballZ_command.py'
Jan 23 10:03:41 compute-0 sudo[168097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:41 compute-0 python3.9[168099]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:03:41 compute-0 sudo[168097]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:41 compute-0 ceph-mon[74335]: pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:03:42 compute-0 sudo[168252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyvodzpqjurtgsqllfvjtmbldnnibjda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162621.8177752-684-171171848433298/AnsiballZ_command.py'
Jan 23 10:03:42 compute-0 sudo[168252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40094f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:42 compute-0 python3.9[168254]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:03:42 compute-0 sudo[168252]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:42 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78005050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:42 compute-0 ceph-mon[74335]: pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:03:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:42.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:42.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:42 compute-0 sudo[168405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqkpjkkeltbxlrezphggxvbfxtdcffdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162622.7061312-684-133787657189750/AnsiballZ_command.py'
Jan 23 10:03:42 compute-0 sudo[168405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:43 compute-0 python3.9[168407]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:03:43 compute-0 sudo[168405]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:43 compute-0 sudo[168558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuhxrtrgotngesxcswpgaoeabdcwohwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162623.3619545-684-164019327435331/AnsiballZ_command.py'
Jan 23 10:03:43 compute-0 sudo[168558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:43 compute-0 podman[168560]: 2026-01-23 10:03:43.757020917 +0000 UTC m=+0.097194026 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 23 10:03:43 compute-0 python3.9[168561]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:03:43 compute-0 sudo[168558]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e70001270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:44 compute-0 sudo[168740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgabfzwadrgmkgxtjsncsapjkyveoqyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162624.0224273-684-270013543288052/AnsiballZ_command.py'
Jan 23 10:03:44 compute-0 sudo[168740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e9c003d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:44 compute-0 python3.9[168742]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:03:44 compute-0 sudo[168740]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:44 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7ea40094f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:03:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:03:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:44.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:03:44 compute-0 sudo[168893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgvfpmsmtuhkiraaisbtngzkbnxzknkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162624.6813135-684-142255054693509/AnsiballZ_command.py'
Jan 23 10:03:44 compute-0 sudo[168893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:44.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:45 compute-0 python3.9[168895]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:03:45 compute-0 sudo[168893]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:45 compute-0 ceph-mon[74335]: pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[114178]: 23/01/2026 10:03:46 : epoch 697345ce : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e78005050 fd 39 proxy ignored for local
Jan 23 10:03:46 compute-0 kernel: ganesha.nfsd[161494]: segfault at 50 ip 00007f7f2768332e sp 00007f7e8bffe210 error 4 in libntirpc.so.5.8[7f7f27668000+2c000] likely on CPU 7 (core 0, socket 7)
Jan 23 10:03:46 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 23 10:03:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:03:46 compute-0 systemd[1]: Started Process Core Dump (PID 168923/UID 0).
Jan 23 10:03:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:03:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:46.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:03:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:03:46.977Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:03:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:03:46.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:03:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:46.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:47 compute-0 sudo[169050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzgkwaqumxvlhszecqjoxqlvblaqmjdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162626.8508358-846-80312594330337/AnsiballZ_getent.py'
Jan 23 10:03:47 compute-0 sudo[169050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:47 compute-0 ceph-mon[74335]: pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:03:47 compute-0 python3.9[169052]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 23 10:03:47 compute-0 sudo[169050]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:48 compute-0 sudo[169205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvlrbgsarvukpomhqkritrfdqafnxvzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162627.7373812-870-41403870725403/AnsiballZ_group.py'
Jan 23 10:03:48 compute-0 sudo[169205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:48 compute-0 python3.9[169207]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 10:03:48 compute-0 groupadd[169208]: group added to /etc/group: name=libvirt, GID=42473
Jan 23 10:03:48 compute-0 groupadd[169208]: group added to /etc/gshadow: name=libvirt
Jan 23 10:03:48 compute-0 groupadd[169208]: new group: name=libvirt, GID=42473
Jan 23 10:03:48 compute-0 sudo[169205]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:48 compute-0 systemd-coredump[168924]: Process 114206 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 79:
                                                    #0  0x00007f7f2768332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    #1  0x0000000000000000 n/a (n/a + 0x0)
                                                    #2  0x00007f7f2768d900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
                                                    ELF object binary architecture: AMD x86-64
Jan 23 10:03:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:03:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:48.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:03:48 compute-0 systemd[1]: systemd-coredump@3-168923-0.service: Deactivated successfully.
Jan 23 10:03:48 compute-0 systemd[1]: systemd-coredump@3-168923-0.service: Consumed 2.551s CPU time.
Jan 23 10:03:48 compute-0 podman[169294]: 2026-01-23 10:03:48.889826835 +0000 UTC m=+0.032142059 container died fd6798f798f784b8073748ebca1512c31bc5cd772166271c0ff2bdccb06fff0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-234261bf31c5ff356861e77739030c2e0cfbf1528b7a3958450618a58a05171d-merged.mount: Deactivated successfully.
Jan 23 10:03:48 compute-0 podman[169294]: 2026-01-23 10:03:48.937929858 +0000 UTC m=+0.080245062 container remove fd6798f798f784b8073748ebca1512c31bc5cd772166271c0ff2bdccb06fff0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:03:48 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
Jan 23 10:03:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:03:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:48.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:03:49 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 10:03:49 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 2.777s CPU time.
Jan 23 10:03:49 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:03:49 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:03:49 compute-0 sudo[169411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyuuxkifmiryfdcupummucsbouuixkci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162628.7720153-894-199805934737484/AnsiballZ_user.py'
Jan 23 10:03:49 compute-0 sudo[169411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:49 compute-0 python3.9[169413]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 10:03:49 compute-0 ceph-mon[74335]: pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:49 compute-0 useradd[169415]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 23 10:03:49 compute-0 sudo[169411]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:49] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:03:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:49] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:03:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:03:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:03:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:03:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:03:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:03:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:03:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:03:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:03:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:50 compute-0 sudo[169573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijowrjrufbyktknwcjfgsbfvnsvlfhtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162630.107864-927-124212879042737/AnsiballZ_setup.py'
Jan 23 10:03:50 compute-0 sudo[169573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:03:50 compute-0 python3.9[169575]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 10:03:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:03:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:50.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:03:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:03:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:50.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:03:51 compute-0 sudo[169573]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:51 compute-0 sudo[169657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiekjyxyvjasurffkelyozueblkyqnxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162630.107864-927-124212879042737/AnsiballZ_dnf.py'
Jan 23 10:03:51 compute-0 sudo[169657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:03:51 compute-0 ceph-mon[74335]: pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:51 compute-0 python3.9[169659]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 10:03:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:03:52 compute-0 ceph-mon[74335]: pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:03:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:52.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:03:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:52.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:03:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100354 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:03:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:03:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:54.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:03:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:03:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:54.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:03:55 compute-0 ceph-mon[74335]: pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:03:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:56.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:03:56.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:03:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:57.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:57 compute-0 ceph-mon[74335]: pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:03:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:03:58 compute-0 podman[169679]: 2026-01-23 10:03:58.534300952 +0000 UTC m=+0.054319702 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:03:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:03:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:58.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:03:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:03:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:03:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:03:59 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 4.
Jan 23 10:03:59 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:03:59 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 2.777s CPU time.
Jan 23 10:03:59 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 10:03:59 compute-0 sudo[169697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:03:59 compute-0 sudo[169697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:03:59 compute-0 sudo[169697]: pam_unix(sudo:session): session closed for user root
Jan 23 10:03:59 compute-0 podman[169770]: 2026-01-23 10:03:59.417084507 +0000 UTC m=+0.026781435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:03:59 compute-0 ceph-mon[74335]: pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:03:59 compute-0 podman[169770]: 2026-01-23 10:03:59.661895794 +0000 UTC m=+0.271592692 container create 10c1cfbfbb12f0570171b321b4b581c117e271bd522e9a01829d5630c9094011 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 23 10:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b7b83275ed58e6758441bbec1d92d2e21fd04bc55b84eb13f7e4f5b2285cd3/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b7b83275ed58e6758441bbec1d92d2e21fd04bc55b84eb13f7e4f5b2285cd3/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b7b83275ed58e6758441bbec1d92d2e21fd04bc55b84eb13f7e4f5b2285cd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b7b83275ed58e6758441bbec1d92d2e21fd04bc55b84eb13f7e4f5b2285cd3/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:03:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:59.752 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:03:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:59.754 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:03:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:03:59.754 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:03:59 compute-0 podman[169770]: 2026-01-23 10:03:59.797171435 +0000 UTC m=+0.406868353 container init 10c1cfbfbb12f0570171b321b4b581c117e271bd522e9a01829d5630c9094011 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 23 10:03:59 compute-0 podman[169770]: 2026-01-23 10:03:59.802287711 +0000 UTC m=+0.411984599 container start 10c1cfbfbb12f0570171b321b4b581c117e271bd522e9a01829d5630c9094011 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 10:03:59 compute-0 bash[169770]: 10c1cfbfbb12f0570171b321b4b581c117e271bd522e9a01829d5630c9094011
Jan 23 10:03:59 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:03:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:03:59 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 10:03:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:03:59 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 10:03:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:03:59 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 10:03:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:03:59 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 10:03:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:03:59 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 10:03:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:03:59 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 10:03:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:03:59 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 10:03:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:59] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:03:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:03:59] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:03:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:03:59 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:04:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:04:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100400 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:04:00 compute-0 ceph-mon[74335]: pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:04:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:00.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:04:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:01.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:04:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:04:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:04:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:02.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:04:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:03 compute-0 ceph-mon[74335]: pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:04:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:04:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:04.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:05.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:04:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:04:05 compute-0 ceph-mon[74335]: pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:04:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:04:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:06 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:04:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:06 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:04:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 10:04:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:06.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:04:06.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:04:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:07.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:07 compute-0 ceph-mon[74335]: pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 10:04:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 10:04:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 10:04:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:08.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 10:04:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:09.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:09 compute-0 ceph-mon[74335]: pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 10:04:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:09] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:04:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:09] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:04:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 10:04:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:10.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:11.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:11 compute-0 ceph-mon[74335]: pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:04:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b74000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:12 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:12.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:13.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:13 compute-0 ceph-mon[74335]: pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:04:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:04:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:14 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:14 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b74000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:14 compute-0 podman[170036]: 2026-01-23 10:04:14.615681154 +0000 UTC m=+0.144817654 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 10:04:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100414 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:04:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:14 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b74000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 10:04:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:14.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 10:04:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:15.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:15 : epoch 6973478f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:04:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:15 : epoch 6973478f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:04:15 compute-0 ceph-mon[74335]: pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:04:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Jan 23 10:04:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:16 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:16 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:16 compute-0 ceph-mon[74335]: pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Jan 23 10:04:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:16 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:16.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:04:16.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:04:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:04:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:17.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:04:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 23 10:04:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:18 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:18 : epoch 6973478f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:04:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:18 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:18 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:04:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:18.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:04:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:19.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:19 compute-0 ceph-mon[74335]: pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 23 10:04:19 compute-0 sudo[170066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:04:19 compute-0 sudo[170066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:19 compute-0 sudo[170066]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:04:19
Jan 23 10:04:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:04:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:04:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'images', 'volumes', '.nfs', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'backups', 'vms']
Jan 23 10:04:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:04:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:19] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:04:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:19] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:04:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:04:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:04:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 23 10:04:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:20 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:04:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:04:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:20 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:20 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:20.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:21.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:21 compute-0 ceph-mon[74335]: pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 23 10:04:22 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Check health
Jan 23 10:04:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 23 10:04:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100422 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:04:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:22 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:22 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:22 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:22.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:23.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:23 compute-0 ceph-mon[74335]: pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 23 10:04:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 23 10:04:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:24 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:24 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:24 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:24.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:25.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:25 compute-0 ceph-mon[74335]: pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 23 10:04:25 compute-0 kernel: SELinux:  Converting 2780 SID table entries...
Jan 23 10:04:25 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 10:04:25 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 10:04:25 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 10:04:25 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 10:04:25 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 10:04:25 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 10:04:25 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 10:04:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 23 10:04:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:26 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:26 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:26 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:26.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:26 compute-0 sudo[170106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:04:26 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 23 10:04:26 compute-0 sudo[170106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:26 compute-0 sudo[170106]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:26 compute-0 sudo[170132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:04:26 compute-0 sudo[170132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:04:26.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:04:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:04:26.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:04:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:04:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:27.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:04:27 compute-0 ceph-mon[74335]: pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 23 10:04:27 compute-0 sudo[170132]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 23 10:04:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 10:04:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 10:04:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 10:04:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:04:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:28 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:28 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 10:04:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 10:04:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:28 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:04:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:28.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:04:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:29.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 10:04:29 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 10:04:29 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:29 compute-0 podman[170189]: 2026-01-23 10:04:29.560431305 +0000 UTC m=+0.075620410 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 10:04:29 compute-0 ceph-mon[74335]: pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:04:29 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:29 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:29] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:04:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:29] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:04:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 23 10:04:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 10:04:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:04:30 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:04:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:04:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:04:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:04:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:04:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:30 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:04:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:04:30 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:04:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:04:30 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:04:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:04:30 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:04:30 compute-0 sudo[170211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:04:30 compute-0 sudo[170211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:30 compute-0 sudo[170211]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:30 compute-0 sudo[170236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:04:30 compute-0 sudo[170236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:30 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 10:04:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:04:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:04:30 compute-0 ceph-mon[74335]: pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:04:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:04:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:04:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:04:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:30 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:04:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:30.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:04:30 compute-0 podman[170300]: 2026-01-23 10:04:30.899776521 +0000 UTC m=+0.054444525 container create 3c9328639b013bae2daf3730045b33d0667751445f5ff2f83cf987875f38e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 10:04:30 compute-0 systemd[1]: Started libpod-conmon-3c9328639b013bae2daf3730045b33d0667751445f5ff2f83cf987875f38e9f3.scope.
Jan 23 10:04:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:04:30 compute-0 podman[170300]: 2026-01-23 10:04:30.875097886 +0000 UTC m=+0.029765870 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:04:30 compute-0 podman[170300]: 2026-01-23 10:04:30.987953887 +0000 UTC m=+0.142621881 container init 3c9328639b013bae2daf3730045b33d0667751445f5ff2f83cf987875f38e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:04:30 compute-0 podman[170300]: 2026-01-23 10:04:30.999032204 +0000 UTC m=+0.153700168 container start 3c9328639b013bae2daf3730045b33d0667751445f5ff2f83cf987875f38e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 10:04:31 compute-0 podman[170300]: 2026-01-23 10:04:31.002793561 +0000 UTC m=+0.157461555 container attach 3c9328639b013bae2daf3730045b33d0667751445f5ff2f83cf987875f38e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 10:04:31 compute-0 jolly_cerf[170316]: 167 167
Jan 23 10:04:31 compute-0 systemd[1]: libpod-3c9328639b013bae2daf3730045b33d0667751445f5ff2f83cf987875f38e9f3.scope: Deactivated successfully.
Jan 23 10:04:31 compute-0 podman[170300]: 2026-01-23 10:04:31.009832412 +0000 UTC m=+0.164500376 container died 3c9328639b013bae2daf3730045b33d0667751445f5ff2f83cf987875f38e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d9c6da4d2d068fcf44b645d7e854ce462624f7dd5586b91c8229bf586b89ff3-merged.mount: Deactivated successfully.
Jan 23 10:04:31 compute-0 podman[170300]: 2026-01-23 10:04:31.052212651 +0000 UTC m=+0.206880615 container remove 3c9328639b013bae2daf3730045b33d0667751445f5ff2f83cf987875f38e9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_cerf, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 10:04:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:31.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:31 compute-0 systemd[1]: libpod-conmon-3c9328639b013bae2daf3730045b33d0667751445f5ff2f83cf987875f38e9f3.scope: Deactivated successfully.
Jan 23 10:04:31 compute-0 podman[170340]: 2026-01-23 10:04:31.236241944 +0000 UTC m=+0.049351790 container create abaeecf3f6bf158c0cd68ae6864665e419cc7259367f3605cec595b7b7564d77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:04:31 compute-0 systemd[1]: Started libpod-conmon-abaeecf3f6bf158c0cd68ae6864665e419cc7259367f3605cec595b7b7564d77.scope.
Jan 23 10:04:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:04:31 compute-0 podman[170340]: 2026-01-23 10:04:31.216785958 +0000 UTC m=+0.029895824 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c5204df1148633b005e5d74546471a8b90a66f0584f348804a000b52f56a58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c5204df1148633b005e5d74546471a8b90a66f0584f348804a000b52f56a58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c5204df1148633b005e5d74546471a8b90a66f0584f348804a000b52f56a58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c5204df1148633b005e5d74546471a8b90a66f0584f348804a000b52f56a58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c5204df1148633b005e5d74546471a8b90a66f0584f348804a000b52f56a58/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:31 compute-0 podman[170340]: 2026-01-23 10:04:31.35315143 +0000 UTC m=+0.166261306 container init abaeecf3f6bf158c0cd68ae6864665e419cc7259367f3605cec595b7b7564d77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:04:31 compute-0 podman[170340]: 2026-01-23 10:04:31.360262003 +0000 UTC m=+0.173371849 container start abaeecf3f6bf158c0cd68ae6864665e419cc7259367f3605cec595b7b7564d77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_ramanujan, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 10:04:31 compute-0 podman[170340]: 2026-01-23 10:04:31.365316218 +0000 UTC m=+0.178426094 container attach abaeecf3f6bf158c0cd68ae6864665e419cc7259367f3605cec595b7b7564d77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:04:31 compute-0 awesome_ramanujan[170357]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:04:31 compute-0 awesome_ramanujan[170357]: --> All data devices are unavailable
Jan 23 10:04:31 compute-0 systemd[1]: libpod-abaeecf3f6bf158c0cd68ae6864665e419cc7259367f3605cec595b7b7564d77.scope: Deactivated successfully.
Jan 23 10:04:31 compute-0 podman[170340]: 2026-01-23 10:04:31.77219297 +0000 UTC m=+0.585302826 container died abaeecf3f6bf158c0cd68ae6864665e419cc7259367f3605cec595b7b7564d77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_ramanujan, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9c5204df1148633b005e5d74546471a8b90a66f0584f348804a000b52f56a58-merged.mount: Deactivated successfully.
Jan 23 10:04:31 compute-0 podman[170340]: 2026-01-23 10:04:31.820072117 +0000 UTC m=+0.633181963 container remove abaeecf3f6bf158c0cd68ae6864665e419cc7259367f3605cec595b7b7564d77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_ramanujan, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 10:04:31 compute-0 systemd[1]: libpod-conmon-abaeecf3f6bf158c0cd68ae6864665e419cc7259367f3605cec595b7b7564d77.scope: Deactivated successfully.
Jan 23 10:04:31 compute-0 sudo[170236]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:31 compute-0 sudo[170383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:04:31 compute-0 sudo[170383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:31 compute-0 sudo[170383]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:31 compute-0 sudo[170408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:04:32 compute-0 sudo[170408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:04:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:32 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b740023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:32 compute-0 podman[170473]: 2026-01-23 10:04:32.415405848 +0000 UTC m=+0.045129069 container create f2c82453f0142fa87326649c9f765546e79f7de3f702f3c7cc383f2f6e1ad8de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_rhodes, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:04:32 compute-0 systemd[1]: Started libpod-conmon-f2c82453f0142fa87326649c9f765546e79f7de3f702f3c7cc383f2f6e1ad8de.scope.
Jan 23 10:04:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:04:32 compute-0 podman[170473]: 2026-01-23 10:04:32.489214915 +0000 UTC m=+0.118938156 container init f2c82453f0142fa87326649c9f765546e79f7de3f702f3c7cc383f2f6e1ad8de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:04:32 compute-0 podman[170473]: 2026-01-23 10:04:32.398542047 +0000 UTC m=+0.028265288 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:04:32 compute-0 podman[170473]: 2026-01-23 10:04:32.496189414 +0000 UTC m=+0.125912625 container start f2c82453f0142fa87326649c9f765546e79f7de3f702f3c7cc383f2f6e1ad8de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:04:32 compute-0 podman[170473]: 2026-01-23 10:04:32.499466727 +0000 UTC m=+0.129189958 container attach f2c82453f0142fa87326649c9f765546e79f7de3f702f3c7cc383f2f6e1ad8de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 23 10:04:32 compute-0 agitated_rhodes[170489]: 167 167
Jan 23 10:04:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:32 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:32 compute-0 systemd[1]: libpod-f2c82453f0142fa87326649c9f765546e79f7de3f702f3c7cc383f2f6e1ad8de.scope: Deactivated successfully.
Jan 23 10:04:32 compute-0 podman[170473]: 2026-01-23 10:04:32.502226306 +0000 UTC m=+0.131949527 container died f2c82453f0142fa87326649c9f765546e79f7de3f702f3c7cc383f2f6e1ad8de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_rhodes, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 23 10:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee099652d87030a86b8a076cf3ad99db16164be6ffaa5ca83a7a45be41e36268-merged.mount: Deactivated successfully.
Jan 23 10:04:32 compute-0 podman[170473]: 2026-01-23 10:04:32.577056382 +0000 UTC m=+0.206779603 container remove f2c82453f0142fa87326649c9f765546e79f7de3f702f3c7cc383f2f6e1ad8de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:04:32 compute-0 systemd[1]: libpod-conmon-f2c82453f0142fa87326649c9f765546e79f7de3f702f3c7cc383f2f6e1ad8de.scope: Deactivated successfully.
Jan 23 10:04:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:32 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:32 compute-0 podman[170515]: 2026-01-23 10:04:32.74415798 +0000 UTC m=+0.047990001 container create 636f28c12b0ada4fa06d83aa6f19a29d11fe73d7f80cca91410bd06fbb05d1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chaplygin, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 10:04:32 compute-0 systemd[1]: Started libpod-conmon-636f28c12b0ada4fa06d83aa6f19a29d11fe73d7f80cca91410bd06fbb05d1df.scope.
Jan 23 10:04:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284118b94c4a5e8b48e4728f30732e7bc04eafb503f0f6879ec9318f036fb199/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284118b94c4a5e8b48e4728f30732e7bc04eafb503f0f6879ec9318f036fb199/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284118b94c4a5e8b48e4728f30732e7bc04eafb503f0f6879ec9318f036fb199/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284118b94c4a5e8b48e4728f30732e7bc04eafb503f0f6879ec9318f036fb199/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:32 compute-0 podman[170515]: 2026-01-23 10:04:32.72277948 +0000 UTC m=+0.026611541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:04:32 compute-0 podman[170515]: 2026-01-23 10:04:32.832976735 +0000 UTC m=+0.136808786 container init 636f28c12b0ada4fa06d83aa6f19a29d11fe73d7f80cca91410bd06fbb05d1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 23 10:04:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:32.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:32 compute-0 podman[170515]: 2026-01-23 10:04:32.842178767 +0000 UTC m=+0.146010798 container start 636f28c12b0ada4fa06d83aa6f19a29d11fe73d7f80cca91410bd06fbb05d1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 10:04:32 compute-0 podman[170515]: 2026-01-23 10:04:32.849018063 +0000 UTC m=+0.152850124 container attach 636f28c12b0ada4fa06d83aa6f19a29d11fe73d7f80cca91410bd06fbb05d1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chaplygin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:04:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:33.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]: {
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:     "1": [
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:         {
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             "devices": [
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "/dev/loop3"
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             ],
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             "lv_name": "ceph_lv0",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             "lv_size": "21470642176",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             "name": "ceph_lv0",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             "tags": {
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.cluster_name": "ceph",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.crush_device_class": "",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.encrypted": "0",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.osd_id": "1",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.type": "block",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.vdo": "0",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:                 "ceph.with_tpm": "0"
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             },
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             "type": "block",
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:             "vg_name": "ceph_vg0"
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:         }
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]:     ]
Jan 23 10:04:33 compute-0 quizzical_chaplygin[170531]: }
Jan 23 10:04:33 compute-0 systemd[1]: libpod-636f28c12b0ada4fa06d83aa6f19a29d11fe73d7f80cca91410bd06fbb05d1df.scope: Deactivated successfully.
Jan 23 10:04:33 compute-0 podman[170515]: 2026-01-23 10:04:33.16427339 +0000 UTC m=+0.468105421 container died 636f28c12b0ada4fa06d83aa6f19a29d11fe73d7f80cca91410bd06fbb05d1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Jan 23 10:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-284118b94c4a5e8b48e4728f30732e7bc04eafb503f0f6879ec9318f036fb199-merged.mount: Deactivated successfully.
Jan 23 10:04:33 compute-0 podman[170515]: 2026-01-23 10:04:33.230299495 +0000 UTC m=+0.534131516 container remove 636f28c12b0ada4fa06d83aa6f19a29d11fe73d7f80cca91410bd06fbb05d1df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chaplygin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 10:04:33 compute-0 systemd[1]: libpod-conmon-636f28c12b0ada4fa06d83aa6f19a29d11fe73d7f80cca91410bd06fbb05d1df.scope: Deactivated successfully.
Jan 23 10:04:33 compute-0 sudo[170408]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:33 compute-0 ceph-mon[74335]: pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:04:33 compute-0 sudo[170552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:04:33 compute-0 sudo[170552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:33 compute-0 sudo[170552]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:33 compute-0 sudo[170577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:04:33 compute-0 sudo[170577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:33 compute-0 podman[170641]: 2026-01-23 10:04:33.879751271 +0000 UTC m=+0.042917686 container create 8e1db78d7264350bf2b6d619cddf49044918cd178a9e81e1f40d45860d3f2710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_perlman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Jan 23 10:04:33 compute-0 systemd[1]: Started libpod-conmon-8e1db78d7264350bf2b6d619cddf49044918cd178a9e81e1f40d45860d3f2710.scope.
Jan 23 10:04:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:04:33 compute-0 podman[170641]: 2026-01-23 10:04:33.95715998 +0000 UTC m=+0.120326425 container init 8e1db78d7264350bf2b6d619cddf49044918cd178a9e81e1f40d45860d3f2710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 23 10:04:33 compute-0 podman[170641]: 2026-01-23 10:04:33.86291839 +0000 UTC m=+0.026084835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:04:33 compute-0 podman[170641]: 2026-01-23 10:04:33.964491489 +0000 UTC m=+0.127657904 container start 8e1db78d7264350bf2b6d619cddf49044918cd178a9e81e1f40d45860d3f2710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 10:04:33 compute-0 suspicious_perlman[170657]: 167 167
Jan 23 10:04:33 compute-0 systemd[1]: libpod-8e1db78d7264350bf2b6d619cddf49044918cd178a9e81e1f40d45860d3f2710.scope: Deactivated successfully.
Jan 23 10:04:33 compute-0 podman[170641]: 2026-01-23 10:04:33.970546972 +0000 UTC m=+0.133713407 container attach 8e1db78d7264350bf2b6d619cddf49044918cd178a9e81e1f40d45860d3f2710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_perlman, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:04:33 compute-0 conmon[170657]: conmon 8e1db78d7264350bf2b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e1db78d7264350bf2b6d619cddf49044918cd178a9e81e1f40d45860d3f2710.scope/container/memory.events
Jan 23 10:04:33 compute-0 podman[170641]: 2026-01-23 10:04:33.971855179 +0000 UTC m=+0.135021594 container died 8e1db78d7264350bf2b6d619cddf49044918cd178a9e81e1f40d45860d3f2710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 10:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c66eedcb987bc212b3daa80e62e68ac87f818f38114c3d321d9f6429fceec325-merged.mount: Deactivated successfully.
Jan 23 10:04:34 compute-0 podman[170641]: 2026-01-23 10:04:34.009953577 +0000 UTC m=+0.173119992 container remove 8e1db78d7264350bf2b6d619cddf49044918cd178a9e81e1f40d45860d3f2710 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 23 10:04:34 compute-0 systemd[1]: libpod-conmon-8e1db78d7264350bf2b6d619cddf49044918cd178a9e81e1f40d45860d3f2710.scope: Deactivated successfully.
Jan 23 10:04:34 compute-0 podman[170682]: 2026-01-23 10:04:34.173822864 +0000 UTC m=+0.045587332 container create a0a86e09da30cb5eacccf02a6d31e155952980cb53cad5075ce241f05ec7dc01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:04:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:34 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:34 compute-0 systemd[1]: Started libpod-conmon-a0a86e09da30cb5eacccf02a6d31e155952980cb53cad5075ce241f05ec7dc01.scope.
Jan 23 10:04:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a5b448852ce50e95c0e899214b59a48d473b559117f163c75b5c98969f7a8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a5b448852ce50e95c0e899214b59a48d473b559117f163c75b5c98969f7a8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:34 compute-0 podman[170682]: 2026-01-23 10:04:34.156722616 +0000 UTC m=+0.028487084 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a5b448852ce50e95c0e899214b59a48d473b559117f163c75b5c98969f7a8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a5b448852ce50e95c0e899214b59a48d473b559117f163c75b5c98969f7a8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:04:34 compute-0 podman[170682]: 2026-01-23 10:04:34.265521531 +0000 UTC m=+0.137286019 container init a0a86e09da30cb5eacccf02a6d31e155952980cb53cad5075ce241f05ec7dc01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 10:04:34 compute-0 podman[170682]: 2026-01-23 10:04:34.277911274 +0000 UTC m=+0.149675742 container start a0a86e09da30cb5eacccf02a6d31e155952980cb53cad5075ce241f05ec7dc01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 10:04:34 compute-0 podman[170682]: 2026-01-23 10:04:34.283270997 +0000 UTC m=+0.155035495 container attach a0a86e09da30cb5eacccf02a6d31e155952980cb53cad5075ce241f05ec7dc01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:04:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:34 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:34 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:34.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:35 compute-0 lvm[170772]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:04:35 compute-0 lvm[170772]: VG ceph_vg0 finished
Jan 23 10:04:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:04:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:04:35 compute-0 trusting_ishizaka[170698]: {}
Jan 23 10:04:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:35.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:35 compute-0 systemd[1]: libpod-a0a86e09da30cb5eacccf02a6d31e155952980cb53cad5075ce241f05ec7dc01.scope: Deactivated successfully.
Jan 23 10:04:35 compute-0 systemd[1]: libpod-a0a86e09da30cb5eacccf02a6d31e155952980cb53cad5075ce241f05ec7dc01.scope: Consumed 1.259s CPU time.
Jan 23 10:04:35 compute-0 podman[170682]: 2026-01-23 10:04:35.100746619 +0000 UTC m=+0.972511087 container died a0a86e09da30cb5eacccf02a6d31e155952980cb53cad5075ce241f05ec7dc01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 10:04:35 compute-0 ceph-mon[74335]: pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3a5b448852ce50e95c0e899214b59a48d473b559117f163c75b5c98969f7a8b-merged.mount: Deactivated successfully.
Jan 23 10:04:35 compute-0 podman[170682]: 2026-01-23 10:04:35.457663656 +0000 UTC m=+1.329428124 container remove a0a86e09da30cb5eacccf02a6d31e155952980cb53cad5075ce241f05ec7dc01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:04:35 compute-0 systemd[1]: libpod-conmon-a0a86e09da30cb5eacccf02a6d31e155952980cb53cad5075ce241f05ec7dc01.scope: Deactivated successfully.
Jan 23 10:04:35 compute-0 sudo[170577]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:04:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:04:35 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:35 compute-0 sudo[170787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:04:35 compute-0 sudo[170787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:35 compute-0 sudo[170787]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:36 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:36 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:04:36 compute-0 ceph-mon[74335]: pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:36 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:36.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:04:36.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:04:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:04:36.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:04:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:37.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:38 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:38 compute-0 kernel: SELinux:  Converting 2780 SID table entries...
Jan 23 10:04:38 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 10:04:38 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 10:04:38 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 10:04:38 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 10:04:38 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 10:04:38 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 10:04:38 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 10:04:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:38 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:38 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:04:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:38.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:04:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:39 compute-0 sudo[170823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:04:39 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 23 10:04:39 compute-0 sudo[170823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:39 compute-0 sudo[170823]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:39] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:04:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:39] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:04:40 compute-0 ceph-mon[74335]: pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:40 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:40 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:40 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:40.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:04:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:42 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:42 compute-0 ceph-mon[74335]: pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:42 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:42 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:42.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:43 compute-0 ceph-mon[74335]: pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:04:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:44 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:44 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:44 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:44.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:44 compute-0 ceph-mon[74335]: pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:45.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:45 compute-0 podman[170854]: 2026-01-23 10:04:45.612885711 +0000 UTC m=+0.114253782 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 23 10:04:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:46 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:46 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:46 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:46.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:04:46.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:04:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:04:46.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:04:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:04:46.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:04:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:47.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:47 compute-0 ceph-mon[74335]: pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:48 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:48 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=cleanup t=2026-01-23T10:04:48.651577607Z level=info msg="Completed cleanup jobs" duration=24.8769ms
Jan 23 10:04:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:48 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=plugins.update.checker t=2026-01-23T10:04:48.748850433Z level=info msg="Update check succeeded" duration=55.139624ms
Jan 23 10:04:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=grafana.update.checker t=2026-01-23T10:04:48.756429149Z level=info msg="Update check succeeded" duration=58.804318ms
Jan 23 10:04:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:04:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:48.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:04:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:49.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:49 compute-0 ceph-mon[74335]: pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:49] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:04:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:49] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:04:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:04:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:04:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:04:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:04:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:04:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:04:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:04:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:04:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:04:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:50 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:50 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:50 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:04:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:50.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:04:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:04:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:51.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:04:51 compute-0 ceph-mon[74335]: pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:04:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:52 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:52 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b64001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:52 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:04:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:52.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:04:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:04:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:04:53 compute-0 ceph-mon[74335]: pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:04:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:54 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:54 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:54 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:04:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:54.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:04:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:55.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:55 compute-0 ceph-mon[74335]: pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:04:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:56 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:56 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b70000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:56 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:56 compute-0 ceph-mon[74335]: pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:56.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:04:56.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:04:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:04:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:57.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:04:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:58 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:58 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:04:58 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:04:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:04:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:58.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:04:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:04:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:04:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:59.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:04:59 compute-0 ceph-mon[74335]: pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:04:59 compute-0 sudo[175136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:04:59 compute-0 sudo[175136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:04:59 compute-0 sudo[175136]: pam_unix(sudo:session): session closed for user root
Jan 23 10:04:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:04:59.754 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:04:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:04:59.755 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:04:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:04:59.756 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:04:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:59] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:04:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:04:59] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:05:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:00 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:00 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:00 compute-0 podman[175738]: 2026-01-23 10:05:00.529492834 +0000 UTC m=+0.059846351 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 10:05:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:00 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:00.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:01.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:01 compute-0 ceph-mon[74335]: pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:05:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:02 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:02 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b70001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:02 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:02.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:03.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:03 compute-0 ceph-mon[74335]: pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:05:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:04 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:04 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:04 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b50000e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:04.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:05:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:05:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:05.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:06 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:06 compute-0 ceph-mon[74335]: pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:05:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:06 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:06 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:06.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:05:06.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:05:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:07.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:07 compute-0 ceph-mon[74335]: pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:08 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b50001940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:08 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:08 compute-0 ceph-mon[74335]: pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:08 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b78003a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000059s ======
Jan 23 10:05:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:08.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000059s
Jan 23 10:05:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:09.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:09] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:05:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:09] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:05:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:10 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:10 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b50001940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:10 compute-0 ceph-mon[74335]: pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[169785]: 23/01/2026 10:05:10 : epoch 6973478f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b5c003c10 fd 38 proxy ignored for local
Jan 23 10:05:10 compute-0 kernel: ganesha.nfsd[170032]: segfault at 50 ip 00007f6c001f332e sp 00007f6b68ff8210 error 4 in libntirpc.so.5.8[7f6c001d8000+2c000] likely on CPU 7 (core 0, socket 7)
Jan 23 10:05:10 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 23 10:05:10 compute-0 systemd[1]: Started Process Core Dump (PID 182429/UID 0).
Jan 23 10:05:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:10.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:11.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:05:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:12.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:13 compute-0 systemd-coredump[182441]: Process 169790 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 53:
                                                    #0  0x00007f6c001f332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 23 10:05:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:13.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:13 compute-0 systemd[1]: systemd-coredump@4-182429-0.service: Deactivated successfully.
Jan 23 10:05:13 compute-0 systemd[1]: systemd-coredump@4-182429-0.service: Consumed 1.464s CPU time.
Jan 23 10:05:13 compute-0 podman[184054]: 2026-01-23 10:05:13.18768421 +0000 UTC m=+0.029510483 container died 10c1cfbfbb12f0570171b321b4b581c117e271bd522e9a01829d5630c9094011 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:05:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-66b7b83275ed58e6758441bbec1d92d2e21fd04bc55b84eb13f7e4f5b2285cd3-merged.mount: Deactivated successfully.
Jan 23 10:05:13 compute-0 podman[184054]: 2026-01-23 10:05:13.241891751 +0000 UTC m=+0.083718004 container remove 10c1cfbfbb12f0570171b321b4b581c117e271bd522e9a01829d5630c9094011 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 10:05:13 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
Jan 23 10:05:13 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 10:05:13 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.835s CPU time.
Jan 23 10:05:13 compute-0 ceph-mon[74335]: pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:05:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:14 compute-0 ceph-mon[74335]: pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:14.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:15.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:16 compute-0 podman[186205]: 2026-01-23 10:05:16.570015566 +0000 UTC m=+0.097350932 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:05:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:16.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:05:16.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:05:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:17.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:17 compute-0 ceph-mon[74335]: pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100518 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:05:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:18.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:19.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:19 compute-0 ceph-mon[74335]: pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:19 compute-0 sudo[187886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:05:19 compute-0 sudo[187886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:19 compute-0 sudo[187886]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:05:19
Jan 23 10:05:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:05:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:05:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'backups', 'images', 'default.rgw.log', 'vms', '.nfs', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes']
Jan 23 10:05:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:05:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:19] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:05:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:19] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:05:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:05:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:05:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:05:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:05:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:20.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:21.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:21 compute-0 ceph-mon[74335]: pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:05:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:22.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:23.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:23 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 5.
Jan 23 10:05:23 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:05:23 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.835s CPU time.
Jan 23 10:05:23 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 10:05:23 compute-0 ceph-mon[74335]: pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:05:23 compute-0 podman[187984]: 2026-01-23 10:05:23.754945241 +0000 UTC m=+0.049186592 container create 2b6760de5d44f38fc3fdf786d207211c3e20052bb951e0c67148ef3069fe3e6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 10:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7727bf51034fb5175fcd75d062f1bc899f0e220520c765e285e3ae9d73af6bf/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7727bf51034fb5175fcd75d062f1bc899f0e220520c765e285e3ae9d73af6bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7727bf51034fb5175fcd75d062f1bc899f0e220520c765e285e3ae9d73af6bf/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7727bf51034fb5175fcd75d062f1bc899f0e220520c765e285e3ae9d73af6bf/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:23 compute-0 podman[187984]: 2026-01-23 10:05:23.826790759 +0000 UTC m=+0.121032140 container init 2b6760de5d44f38fc3fdf786d207211c3e20052bb951e0c67148ef3069fe3e6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:05:23 compute-0 podman[187984]: 2026-01-23 10:05:23.732330465 +0000 UTC m=+0.026571846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:05:23 compute-0 podman[187984]: 2026-01-23 10:05:23.832415507 +0000 UTC m=+0.126656868 container start 2b6760de5d44f38fc3fdf786d207211c3e20052bb951e0c67148ef3069fe3e6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:05:23 compute-0 bash[187984]: 2b6760de5d44f38fc3fdf786d207211c3e20052bb951e0c67148ef3069fe3e6c
Jan 23 10:05:23 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:05:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:23 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 10:05:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:23 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 10:05:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:23 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 10:05:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:23 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 10:05:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:23 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 10:05:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:23 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 10:05:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:23 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 10:05:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:23 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:05:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:05:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:24.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:25.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:25 compute-0 ceph-mon[74335]: pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:05:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:05:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:05:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:26.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:05:26 compute-0 ceph-mon[74335]: pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:05:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:05:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:05:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:05:26.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:05:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:27.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:05:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:28.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:29.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:29 compute-0 ceph-mon[74335]: pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:05:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:29] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:05:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:29] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:05:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:29 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:05:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:29 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:05:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:05:30 compute-0 ceph-mon[74335]: pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:05:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:30.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:05:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:31.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:05:31 compute-0 podman[188049]: 2026-01-23 10:05:31.531414974 +0000 UTC m=+0.057070958 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 10:05:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:05:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:32.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:33.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:33 compute-0 ceph-mon[74335]: pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:05:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:05:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:34.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:05:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:05:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:35.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:35 compute-0 kernel: SELinux:  Converting 2781 SID table entries...
Jan 23 10:05:35 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 10:05:35 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 23 10:05:35 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 10:05:35 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 23 10:05:35 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 10:05:35 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 10:05:35 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 10:05:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:36 compute-0 sudo[188079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:05:36 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 23 10:05:36 compute-0 sudo[188079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:36 compute-0 sudo[188079]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:36 compute-0 sudo[188105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:05:36 compute-0 sudo[188105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:05:36 compute-0 ceph-mon[74335]: pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:05:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:05:36 compute-0 sudo[188105]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:36 compute-0 groupadd[188166]: group added to /etc/group: name=dnsmasq, GID=992
Jan 23 10:05:36 compute-0 groupadd[188166]: group added to /etc/gshadow: name=dnsmasq
Jan 23 10:05:36 compute-0 groupadd[188166]: new group: name=dnsmasq, GID=992
Jan 23 10:05:36 compute-0 useradd[188173]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 23 10:05:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:36.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:36 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 23 10:05:36 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:05:36.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:05:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:37.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 10:05:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:37 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:05:37 compute-0 ceph-mon[74335]: pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:05:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:05:38 compute-0 groupadd[188199]: group added to /etc/group: name=clevis, GID=991
Jan 23 10:05:38 compute-0 groupadd[188199]: group added to /etc/gshadow: name=clevis
Jan 23 10:05:38 compute-0 groupadd[188199]: new group: name=clevis, GID=991
Jan 23 10:05:38 compute-0 useradd[188206]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 23 10:05:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:38 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc498000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:38 compute-0 usermod[188220]: add 'clevis' to group 'tss'
Jan 23 10:05:38 compute-0 usermod[188220]: add 'clevis' to shadow group 'tss'
Jan 23 10:05:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:38 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:38 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:38.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:39.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:39 compute-0 sudo[188244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:05:39 compute-0 sudo[188244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:39 compute-0 sudo[188244]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:39] "GET /metrics HTTP/1.1" 200 48435 "" "Prometheus/2.51.0"
Jan 23 10:05:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:39] "GET /metrics HTTP/1.1" 200 48435 "" "Prometheus/2.51.0"
Jan 23 10:05:40 compute-0 ceph-mon[74335]: pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:05:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:05:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:40 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 10:05:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 10:05:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:40 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:40 compute-0 polkitd[43358]: Reloading rules
Jan 23 10:05:40 compute-0 polkitd[43358]: Collecting garbage unconditionally...
Jan 23 10:05:40 compute-0 polkitd[43358]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 10:05:40 compute-0 polkitd[43358]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 10:05:40 compute-0 polkitd[43358]: Finished loading, compiling and executing 3 rules
Jan 23 10:05:40 compute-0 polkitd[43358]: Reloading rules
Jan 23 10:05:40 compute-0 polkitd[43358]: Collecting garbage unconditionally...
Jan 23 10:05:40 compute-0 polkitd[43358]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 10:05:40 compute-0 polkitd[43358]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 10:05:40 compute-0 polkitd[43358]: Finished loading, compiling and executing 3 rules
Jan 23 10:05:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:40 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100540 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:05:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:40.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:05:41 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:05:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:05:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:05:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:05:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:05:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:05:41 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:05:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:05:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:05:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:05:41 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:05:41 compute-0 sudo[188366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:05:41 compute-0 sudo[188366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:41 compute-0 sudo[188366]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:41.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:41 compute-0 sudo[188407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:05:41 compute-0 sudo[188407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:41 compute-0 ceph-mon[74335]: pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:05:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:05:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:05:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:05:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:05:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:05:41 compute-0 podman[188525]: 2026-01-23 10:05:41.647382722 +0000 UTC m=+0.051167051 container create 3c85a828d66ebf64f71e28678e450f9b0d68943b54499ab0396a3ac399c83b7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:05:41 compute-0 groupadd[188541]: group added to /etc/group: name=ceph, GID=167
Jan 23 10:05:41 compute-0 groupadd[188541]: group added to /etc/gshadow: name=ceph
Jan 23 10:05:41 compute-0 systemd[1]: Started libpod-conmon-3c85a828d66ebf64f71e28678e450f9b0d68943b54499ab0396a3ac399c83b7e.scope.
Jan 23 10:05:41 compute-0 groupadd[188541]: new group: name=ceph, GID=167
Jan 23 10:05:41 compute-0 podman[188525]: 2026-01-23 10:05:41.624174248 +0000 UTC m=+0.027958587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:05:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:05:41 compute-0 useradd[188552]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 23 10:05:41 compute-0 podman[188525]: 2026-01-23 10:05:41.748122144 +0000 UTC m=+0.151906483 container init 3c85a828d66ebf64f71e28678e450f9b0d68943b54499ab0396a3ac399c83b7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:05:41 compute-0 podman[188525]: 2026-01-23 10:05:41.755089732 +0000 UTC m=+0.158874051 container start 3c85a828d66ebf64f71e28678e450f9b0d68943b54499ab0396a3ac399c83b7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:05:41 compute-0 systemd[1]: libpod-3c85a828d66ebf64f71e28678e450f9b0d68943b54499ab0396a3ac399c83b7e.scope: Deactivated successfully.
Jan 23 10:05:41 compute-0 pedantic_turing[188546]: 167 167
Jan 23 10:05:41 compute-0 conmon[188546]: conmon 3c85a828d66ebf64f71e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c85a828d66ebf64f71e28678e450f9b0d68943b54499ab0396a3ac399c83b7e.scope/container/memory.events
Jan 23 10:05:41 compute-0 podman[188525]: 2026-01-23 10:05:41.764225726 +0000 UTC m=+0.168010065 container attach 3c85a828d66ebf64f71e28678e450f9b0d68943b54499ab0396a3ac399c83b7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:05:41 compute-0 podman[188525]: 2026-01-23 10:05:41.764650168 +0000 UTC m=+0.168434487 container died 3c85a828d66ebf64f71e28678e450f9b0d68943b54499ab0396a3ac399c83b7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 10:05:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-23337a4a9f33cc1d96bf048a062788ae11b59ca0fb079e3562ac3d2618544355-merged.mount: Deactivated successfully.
Jan 23 10:05:41 compute-0 podman[188525]: 2026-01-23 10:05:41.807955893 +0000 UTC m=+0.211740212 container remove 3c85a828d66ebf64f71e28678e450f9b0d68943b54499ab0396a3ac399c83b7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_turing, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 23 10:05:41 compute-0 systemd[1]: libpod-conmon-3c85a828d66ebf64f71e28678e450f9b0d68943b54499ab0396a3ac399c83b7e.scope: Deactivated successfully.
Jan 23 10:05:41 compute-0 podman[188581]: 2026-01-23 10:05:41.977993917 +0000 UTC m=+0.046531802 container create 9efff6bb7f9e595e8dc8e20811e5dbece9daf4701da4e7ee3335038e93ea9c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 10:05:42 compute-0 systemd[1]: Started libpod-conmon-9efff6bb7f9e595e8dc8e20811e5dbece9daf4701da4e7ee3335038e93ea9c62.scope.
Jan 23 10:05:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:05:42 compute-0 podman[188581]: 2026-01-23 10:05:41.958285878 +0000 UTC m=+0.026823793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7652b3ebbd1ae7fb8aff8a5740578fc38a1e7fce96b989e7ce91de9fbac37ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7652b3ebbd1ae7fb8aff8a5740578fc38a1e7fce96b989e7ce91de9fbac37ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7652b3ebbd1ae7fb8aff8a5740578fc38a1e7fce96b989e7ce91de9fbac37ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7652b3ebbd1ae7fb8aff8a5740578fc38a1e7fce96b989e7ce91de9fbac37ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7652b3ebbd1ae7fb8aff8a5740578fc38a1e7fce96b989e7ce91de9fbac37ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:42 compute-0 podman[188581]: 2026-01-23 10:05:42.085583314 +0000 UTC m=+0.154121209 container init 9efff6bb7f9e595e8dc8e20811e5dbece9daf4701da4e7ee3335038e93ea9c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 10:05:42 compute-0 podman[188581]: 2026-01-23 10:05:42.093274364 +0000 UTC m=+0.161812249 container start 9efff6bb7f9e595e8dc8e20811e5dbece9daf4701da4e7ee3335038e93ea9c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:05:42 compute-0 podman[188581]: 2026-01-23 10:05:42.096710206 +0000 UTC m=+0.165248091 container attach 9efff6bb7f9e595e8dc8e20811e5dbece9daf4701da4e7ee3335038e93ea9c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 23 10:05:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:05:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:42 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:42 compute-0 pedantic_albattani[188598]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:05:42 compute-0 pedantic_albattani[188598]: --> All data devices are unavailable
Jan 23 10:05:42 compute-0 systemd[1]: libpod-9efff6bb7f9e595e8dc8e20811e5dbece9daf4701da4e7ee3335038e93ea9c62.scope: Deactivated successfully.
Jan 23 10:05:42 compute-0 conmon[188598]: conmon 9efff6bb7f9e595e8dc8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9efff6bb7f9e595e8dc8e20811e5dbece9daf4701da4e7ee3335038e93ea9c62.scope/container/memory.events
Jan 23 10:05:42 compute-0 podman[188581]: 2026-01-23 10:05:42.466720549 +0000 UTC m=+0.535258444 container died 9efff6bb7f9e595e8dc8e20811e5dbece9daf4701da4e7ee3335038e93ea9c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 23 10:05:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7652b3ebbd1ae7fb8aff8a5740578fc38a1e7fce96b989e7ce91de9fbac37ab-merged.mount: Deactivated successfully.
Jan 23 10:05:42 compute-0 podman[188581]: 2026-01-23 10:05:42.508916961 +0000 UTC m=+0.577454846 container remove 9efff6bb7f9e595e8dc8e20811e5dbece9daf4701da4e7ee3335038e93ea9c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:05:42 compute-0 systemd[1]: libpod-conmon-9efff6bb7f9e595e8dc8e20811e5dbece9daf4701da4e7ee3335038e93ea9c62.scope: Deactivated successfully.
Jan 23 10:05:42 compute-0 sudo[188407]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:42 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:42 compute-0 sudo[188626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:05:42 compute-0 sudo[188626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:42 compute-0 sudo[188626]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:42 compute-0 sudo[188651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:05:42 compute-0 sudo[188651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:42 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:42.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:43 compute-0 podman[188718]: 2026-01-23 10:05:43.105688013 +0000 UTC m=+0.039509722 container create 20ec54bbc0321c6bcbc70745103cbb4eeb8e165728e96044c917381b94e10603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sutherland, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:05:43 compute-0 systemd[1]: Started libpod-conmon-20ec54bbc0321c6bcbc70745103cbb4eeb8e165728e96044c917381b94e10603.scope.
Jan 23 10:05:43 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:05:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:43.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:43 compute-0 podman[188718]: 2026-01-23 10:05:43.088336965 +0000 UTC m=+0.022158694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:05:43 compute-0 podman[188718]: 2026-01-23 10:05:43.187002895 +0000 UTC m=+0.120824624 container init 20ec54bbc0321c6bcbc70745103cbb4eeb8e165728e96044c917381b94e10603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:05:43 compute-0 podman[188718]: 2026-01-23 10:05:43.195006804 +0000 UTC m=+0.128828513 container start 20ec54bbc0321c6bcbc70745103cbb4eeb8e165728e96044c917381b94e10603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sutherland, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:05:43 compute-0 podman[188718]: 2026-01-23 10:05:43.19855708 +0000 UTC m=+0.132378809 container attach 20ec54bbc0321c6bcbc70745103cbb4eeb8e165728e96044c917381b94e10603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:05:43 compute-0 beautiful_sutherland[188734]: 167 167
Jan 23 10:05:43 compute-0 systemd[1]: libpod-20ec54bbc0321c6bcbc70745103cbb4eeb8e165728e96044c917381b94e10603.scope: Deactivated successfully.
Jan 23 10:05:43 compute-0 podman[188718]: 2026-01-23 10:05:43.200733795 +0000 UTC m=+0.134555514 container died 20ec54bbc0321c6bcbc70745103cbb4eeb8e165728e96044c917381b94e10603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec4e3a5692e897389eeed087a879148b3e3250503d7c519157eea368dc28664-merged.mount: Deactivated successfully.
Jan 23 10:05:43 compute-0 podman[188718]: 2026-01-23 10:05:43.242538935 +0000 UTC m=+0.176360644 container remove 20ec54bbc0321c6bcbc70745103cbb4eeb8e165728e96044c917381b94e10603 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:05:43 compute-0 systemd[1]: libpod-conmon-20ec54bbc0321c6bcbc70745103cbb4eeb8e165728e96044c917381b94e10603.scope: Deactivated successfully.
Jan 23 10:05:43 compute-0 podman[188758]: 2026-01-23 10:05:43.410997392 +0000 UTC m=+0.051466650 container create 0748d8487c84c276c82fc6b5ca8383f056b402e9876053e2affc0a146bb9af2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 23 10:05:43 compute-0 ceph-mon[74335]: pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:05:43 compute-0 systemd[1]: Started libpod-conmon-0748d8487c84c276c82fc6b5ca8383f056b402e9876053e2affc0a146bb9af2b.scope.
Jan 23 10:05:43 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:05:43 compute-0 podman[188758]: 2026-01-23 10:05:43.384649054 +0000 UTC m=+0.025118332 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2114bc25b6fd4c0995694364d1d2996ec1de6ee5a430ac6b633879dbc531bc8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2114bc25b6fd4c0995694364d1d2996ec1de6ee5a430ac6b633879dbc531bc8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2114bc25b6fd4c0995694364d1d2996ec1de6ee5a430ac6b633879dbc531bc8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2114bc25b6fd4c0995694364d1d2996ec1de6ee5a430ac6b633879dbc531bc8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:43 compute-0 podman[188758]: 2026-01-23 10:05:43.503795986 +0000 UTC m=+0.144265244 container init 0748d8487c84c276c82fc6b5ca8383f056b402e9876053e2affc0a146bb9af2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Jan 23 10:05:43 compute-0 podman[188758]: 2026-01-23 10:05:43.511198407 +0000 UTC m=+0.151667655 container start 0748d8487c84c276c82fc6b5ca8383f056b402e9876053e2affc0a146bb9af2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 10:05:43 compute-0 podman[188758]: 2026-01-23 10:05:43.514262779 +0000 UTC m=+0.154732037 container attach 0748d8487c84c276c82fc6b5ca8383f056b402e9876053e2affc0a146bb9af2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:05:43 compute-0 bold_poitras[188774]: {
Jan 23 10:05:43 compute-0 bold_poitras[188774]:     "1": [
Jan 23 10:05:43 compute-0 bold_poitras[188774]:         {
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             "devices": [
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "/dev/loop3"
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             ],
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             "lv_name": "ceph_lv0",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             "lv_size": "21470642176",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             "name": "ceph_lv0",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             "tags": {
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.cluster_name": "ceph",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.crush_device_class": "",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.encrypted": "0",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.osd_id": "1",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.type": "block",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.vdo": "0",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:                 "ceph.with_tpm": "0"
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             },
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             "type": "block",
Jan 23 10:05:43 compute-0 bold_poitras[188774]:             "vg_name": "ceph_vg0"
Jan 23 10:05:43 compute-0 bold_poitras[188774]:         }
Jan 23 10:05:43 compute-0 bold_poitras[188774]:     ]
Jan 23 10:05:43 compute-0 bold_poitras[188774]: }
Jan 23 10:05:43 compute-0 systemd[1]: libpod-0748d8487c84c276c82fc6b5ca8383f056b402e9876053e2affc0a146bb9af2b.scope: Deactivated successfully.
Jan 23 10:05:43 compute-0 podman[188758]: 2026-01-23 10:05:43.865738248 +0000 UTC m=+0.506207516 container died 0748d8487c84c276c82fc6b5ca8383f056b402e9876053e2affc0a146bb9af2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2114bc25b6fd4c0995694364d1d2996ec1de6ee5a430ac6b633879dbc531bc8f-merged.mount: Deactivated successfully.
Jan 23 10:05:43 compute-0 podman[188758]: 2026-01-23 10:05:43.907388483 +0000 UTC m=+0.547857731 container remove 0748d8487c84c276c82fc6b5ca8383f056b402e9876053e2affc0a146bb9af2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 23 10:05:43 compute-0 systemd[1]: libpod-conmon-0748d8487c84c276c82fc6b5ca8383f056b402e9876053e2affc0a146bb9af2b.scope: Deactivated successfully.
Jan 23 10:05:43 compute-0 sudo[188651]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:44 compute-0 sudo[188796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:05:44 compute-0 sudo[188796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:44 compute-0 sudo[188796]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:44 compute-0 sudo[188823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:05:44 compute-0 sudo[188823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:05:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:44 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:44 compute-0 podman[189108]: 2026-01-23 10:05:44.503468454 +0000 UTC m=+0.042206643 container create 4967d664d0a6b42ec80dee7d8cb8e9fec2f4c3ef7e10e45f80ddddf8ba940702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_leavitt, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:05:44 compute-0 systemd[1]: Started libpod-conmon-4967d664d0a6b42ec80dee7d8cb8e9fec2f4c3ef7e10e45f80ddddf8ba940702.scope.
Jan 23 10:05:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:05:44 compute-0 podman[189108]: 2026-01-23 10:05:44.481044983 +0000 UTC m=+0.019783192 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:05:44 compute-0 podman[189108]: 2026-01-23 10:05:44.589785825 +0000 UTC m=+0.128524034 container init 4967d664d0a6b42ec80dee7d8cb8e9fec2f4c3ef7e10e45f80ddddf8ba940702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_leavitt, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:05:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:44 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:44 compute-0 podman[189108]: 2026-01-23 10:05:44.599274698 +0000 UTC m=+0.138012887 container start 4967d664d0a6b42ec80dee7d8cb8e9fec2f4c3ef7e10e45f80ddddf8ba940702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 10:05:44 compute-0 podman[189108]: 2026-01-23 10:05:44.602562537 +0000 UTC m=+0.141300846 container attach 4967d664d0a6b42ec80dee7d8cb8e9fec2f4c3ef7e10e45f80ddddf8ba940702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_leavitt, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 10:05:44 compute-0 vigilant_leavitt[189193]: 167 167
Jan 23 10:05:44 compute-0 systemd[1]: libpod-4967d664d0a6b42ec80dee7d8cb8e9fec2f4c3ef7e10e45f80ddddf8ba940702.scope: Deactivated successfully.
Jan 23 10:05:44 compute-0 podman[189108]: 2026-01-23 10:05:44.60468707 +0000 UTC m=+0.143425259 container died 4967d664d0a6b42ec80dee7d8cb8e9fec2f4c3ef7e10e45f80ddddf8ba940702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_leavitt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-be004547949892cc09f7da31b29058b11285b2d0bc5054e104b5c1ed3fcafd09-merged.mount: Deactivated successfully.
Jan 23 10:05:44 compute-0 podman[189108]: 2026-01-23 10:05:44.644520901 +0000 UTC m=+0.183259090 container remove 4967d664d0a6b42ec80dee7d8cb8e9fec2f4c3ef7e10e45f80ddddf8ba940702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_leavitt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 10:05:44 compute-0 systemd[1]: libpod-conmon-4967d664d0a6b42ec80dee7d8cb8e9fec2f4c3ef7e10e45f80ddddf8ba940702.scope: Deactivated successfully.
Jan 23 10:05:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:44 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:44 compute-0 podman[189365]: 2026-01-23 10:05:44.823370948 +0000 UTC m=+0.059234112 container create 10f1352f6e28721cbe3021feebb9f4300e18bf8e3a3af4c4e5cde84b8fabe618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_buck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:05:44 compute-0 systemd[1]: Started libpod-conmon-10f1352f6e28721cbe3021feebb9f4300e18bf8e3a3af4c4e5cde84b8fabe618.scope.
Jan 23 10:05:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163723d2e58a83d500af769df8f10de575775dc794ca68654f58256e68024ef4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163723d2e58a83d500af769df8f10de575775dc794ca68654f58256e68024ef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163723d2e58a83d500af769df8f10de575775dc794ca68654f58256e68024ef4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163723d2e58a83d500af769df8f10de575775dc794ca68654f58256e68024ef4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:05:44 compute-0 podman[189365]: 2026-01-23 10:05:44.801602118 +0000 UTC m=+0.037465302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:05:44 compute-0 podman[189365]: 2026-01-23 10:05:44.908874155 +0000 UTC m=+0.144737329 container init 10f1352f6e28721cbe3021feebb9f4300e18bf8e3a3af4c4e5cde84b8fabe618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_buck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 10:05:44 compute-0 podman[189365]: 2026-01-23 10:05:44.915504283 +0000 UTC m=+0.151367447 container start 10f1352f6e28721cbe3021feebb9f4300e18bf8e3a3af4c4e5cde84b8fabe618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:05:44 compute-0 podman[189365]: 2026-01-23 10:05:44.918608626 +0000 UTC m=+0.154471810 container attach 10f1352f6e28721cbe3021feebb9f4300e18bf8e3a3af4c4e5cde84b8fabe618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:05:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:44.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:45.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:45 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 23 10:05:45 compute-0 sshd[1004]: Received signal 15; terminating.
Jan 23 10:05:45 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 23 10:05:45 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 23 10:05:45 compute-0 systemd[1]: sshd.service: Consumed 2.355s CPU time, read 32.0K from disk, written 4.0K to disk.
Jan 23 10:05:45 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 23 10:05:45 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 23 10:05:45 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 10:05:45 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 10:05:45 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 10:05:45 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 23 10:05:45 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 23 10:05:45 compute-0 sshd[189686]: Server listening on 0.0.0.0 port 22.
Jan 23 10:05:45 compute-0 sshd[189686]: Server listening on :: port 22.
Jan 23 10:05:45 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 23 10:05:45 compute-0 ceph-mon[74335]: pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:05:45 compute-0 lvm[189741]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:05:45 compute-0 lvm[189741]: VG ceph_vg0 finished
Jan 23 10:05:45 compute-0 clever_buck[189451]: {}
Jan 23 10:05:45 compute-0 systemd[1]: libpod-10f1352f6e28721cbe3021feebb9f4300e18bf8e3a3af4c4e5cde84b8fabe618.scope: Deactivated successfully.
Jan 23 10:05:45 compute-0 systemd[1]: libpod-10f1352f6e28721cbe3021feebb9f4300e18bf8e3a3af4c4e5cde84b8fabe618.scope: Consumed 1.182s CPU time.
Jan 23 10:05:45 compute-0 podman[189365]: 2026-01-23 10:05:45.683523046 +0000 UTC m=+0.919386230 container died 10f1352f6e28721cbe3021feebb9f4300e18bf8e3a3af4c4e5cde84b8fabe618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_buck, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-163723d2e58a83d500af769df8f10de575775dc794ca68654f58256e68024ef4-merged.mount: Deactivated successfully.
Jan 23 10:05:45 compute-0 podman[189365]: 2026-01-23 10:05:45.728981995 +0000 UTC m=+0.964845159 container remove 10f1352f6e28721cbe3021feebb9f4300e18bf8e3a3af4c4e5cde84b8fabe618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_buck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 10:05:45 compute-0 systemd[1]: libpod-conmon-10f1352f6e28721cbe3021feebb9f4300e18bf8e3a3af4c4e5cde84b8fabe618.scope: Deactivated successfully.
Jan 23 10:05:45 compute-0 sudo[188823]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:05:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:05:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:45 compute-0 sudo[189789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:05:45 compute-0 sudo[189789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:45 compute-0 sudo[189789]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:05:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:46 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:46 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:46 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:46 compute-0 podman[189927]: 2026-01-23 10:05:46.744807157 +0000 UTC m=+0.123339699 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 10:05:46 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:46 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:05:46 compute-0 ceph-mon[74335]: pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:05:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:46.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:05:47.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:05:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 10:05:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 10:05:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:05:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:47.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:05:47 compute-0 systemd[1]: Reloading.
Jan 23 10:05:47 compute-0 systemd-rc-local-generator[190036]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:05:47 compute-0 systemd-sysv-generator[190042]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:05:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 10:05:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:05:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:48 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:48 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:48 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:48.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:49.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:49] "GET /metrics HTTP/1.1" 200 48435 "" "Prometheus/2.51.0"
Jan 23 10:05:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:49] "GET /metrics HTTP/1.1" 200 48435 "" "Prometheus/2.51.0"
Jan 23 10:05:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:05:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:05:50 compute-0 ceph-mon[74335]: pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:05:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:05:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:05:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:05:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:05:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:05:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:05:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:05:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:05:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:50 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4800032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:50 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:50 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:50.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:51.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:51 compute-0 ceph-mon[74335]: pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:05:51 compute-0 sudo[169657]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:05:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:52 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:52 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4800032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:52 compute-0 sudo[195817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xryzjcbunxxfajnthqleujluhlkapoqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162752.0282338-963-176694381644779/AnsiballZ_systemd.py'
Jan 23 10:05:52 compute-0 sudo[195817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:05:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:52 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:52 compute-0 python3.9[195842]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 10:05:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:05:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:52.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:05:53 compute-0 systemd[1]: Reloading.
Jan 23 10:05:53 compute-0 ceph-mon[74335]: pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:05:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:53.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:53 compute-0 systemd-rc-local-generator[196254]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:05:53 compute-0 systemd-sysv-generator[196258]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:05:53 compute-0 sudo[195817]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:53 compute-0 sudo[197196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrkteepbgbpfusguxtqahiamvskjbscc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162753.668854-963-263618681338008/AnsiballZ_systemd.py'
Jan 23 10:05:53 compute-0 sudo[197196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:05:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:54 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:54 compute-0 python3.9[197222]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 10:05:54 compute-0 systemd[1]: Reloading.
Jan 23 10:05:54 compute-0 systemd-rc-local-generator[197821]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:05:54 compute-0 systemd-sysv-generator[197826]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:05:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:54 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:54 compute-0 sudo[197196]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:54 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4800032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:54.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:55 compute-0 sudo[198633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmnwdhjvskqvabbmpblpsmxekioacixf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162754.811472-963-142449295291991/AnsiballZ_systemd.py'
Jan 23 10:05:55 compute-0 sudo[198633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:05:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:55.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:55 compute-0 ceph-mon[74335]: pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:55 compute-0 python3.9[198657]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 10:05:55 compute-0 systemd[1]: Reloading.
Jan 23 10:05:55 compute-0 systemd-sysv-generator[198997]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:05:55 compute-0 systemd-rc-local-generator[198994]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:05:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 10:05:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 10:05:55 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.345s CPU time.
Jan 23 10:05:55 compute-0 systemd[1]: run-rcbe9b3536df54904b3e2e28c87d0b37f.service: Deactivated successfully.
Jan 23 10:05:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:05:55 compute-0 sudo[198633]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:05:56 compute-0 sudo[199155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhfxvzjkrysqdbvnuoofjuaonxpjagge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162755.9793584-963-127515477902554/AnsiballZ_systemd.py'
Jan 23 10:05:56 compute-0 sudo[199155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:05:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:56 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:56 compute-0 python3.9[199157]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 10:05:56 compute-0 systemd[1]: Reloading.
Jan 23 10:05:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:56 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:56 compute-0 systemd-rc-local-generator[199185]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:05:56 compute-0 systemd-sysv-generator[199189]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:05:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:56 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:56 compute-0 sudo[199155]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:56.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:05:57.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:05:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:57 compute-0 ceph-mon[74335]: pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:05:57 compute-0 sudo[199345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxcatpsrhhohhzccdqkshbgmzypazgpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162757.0953476-1050-53867964350627/AnsiballZ_systemd.py'
Jan 23 10:05:57 compute-0 sudo[199345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:05:57 compute-0 python3.9[199347]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:05:57 compute-0 systemd[1]: Reloading.
Jan 23 10:05:57 compute-0 systemd-rc-local-generator[199378]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:05:57 compute-0 systemd-sysv-generator[199381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:05:58 compute-0 sudo[199345]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:58 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:58 compute-0 sudo[199536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czpfwwoaktmvevemjtnhqnxwdrtxifyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162758.2053406-1050-85694783773949/AnsiballZ_systemd.py'
Jan 23 10:05:58 compute-0 sudo[199536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:05:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:58 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:05:58 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:05:58 compute-0 python3.9[199538]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:05:58 compute-0 systemd[1]: Reloading.
Jan 23 10:05:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:58.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:58 compute-0 systemd-sysv-generator[199572]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:05:58 compute-0 systemd-rc-local-generator[199568]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:05:59 compute-0 sudo[199536]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:05:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:05:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:59.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:05:59 compute-0 ceph-mon[74335]: pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:05:59 compute-0 sudo[199726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbjruzkzyghycbvfmgkxtgkrkvvfzvmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162759.3203976-1050-198782388611701/AnsiballZ_systemd.py'
Jan 23 10:05:59 compute-0 sudo[199726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:05:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:05:59.756 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:05:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:05:59.758 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:05:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:05:59.759 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:05:59 compute-0 sudo[199729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:05:59 compute-0 sudo[199729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:05:59 compute-0 sudo[199729]: pam_unix(sudo:session): session closed for user root
Jan 23 10:05:59 compute-0 python3.9[199728]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:05:59 compute-0 systemd[1]: Reloading.
Jan 23 10:05:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:59] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:05:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:05:59] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:06:00 compute-0 systemd-sysv-generator[199784]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:06:00 compute-0 systemd-rc-local-generator[199781]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:06:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:00 compute-0 sudo[199726]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:00 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:00 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:00 compute-0 sudo[199943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsesmhshxsakznsewugxglkxlncpqkna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162760.433815-1050-9707199983476/AnsiballZ_systemd.py'
Jan 23 10:06:00 compute-0 sudo[199943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:00 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:00.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:01 compute-0 python3.9[199945]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:01 compute-0 sudo[199943]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:06:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:01.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:06:01 compute-0 ceph-mon[74335]: pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:01 compute-0 sudo[200098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxizmroczplbrilfvozltnzkstzvdgjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162761.2213051-1050-262408565161215/AnsiballZ_systemd.py'
Jan 23 10:06:01 compute-0 sudo[200098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:01 compute-0 python3.9[200100]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:01 compute-0 systemd[1]: Reloading.
Jan 23 10:06:02 compute-0 podman[200103]: 2026-01-23 10:06:02.046613584 +0000 UTC m=+0.179461410 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 23 10:06:02 compute-0 systemd-rc-local-generator[200149]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:06:02 compute-0 systemd-sysv-generator[200153]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:06:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:06:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:02 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:02 compute-0 sudo[200098]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:02 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:02 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:02.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:03 compute-0 sudo[200309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqwzbnfocqggewcjxudttqzjacqttyjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162762.7872937-1158-30717389238754/AnsiballZ_systemd.py'
Jan 23 10:06:03 compute-0 sudo[200309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:03.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:03 compute-0 python3.9[200311]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 10:06:03 compute-0 systemd[1]: Reloading.
Jan 23 10:06:03 compute-0 systemd-rc-local-generator[200343]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:06:03 compute-0 systemd-sysv-generator[200346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:06:03 compute-0 ceph-mon[74335]: pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:06:03 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 23 10:06:03 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 23 10:06:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100603 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:06:03 compute-0 sudo[200309]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:04 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:04 compute-0 sudo[200505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzukaewbcwwcsbainneycujoafmhyagr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162764.0476675-1182-120718160348208/AnsiballZ_systemd.py'
Jan 23 10:06:04 compute-0 sudo[200505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:04 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:04 compute-0 python3.9[200507]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:04 compute-0 sudo[200505]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:04 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:04.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:06:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:06:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:05.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:05 compute-0 ceph-mon[74335]: pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:06:05 compute-0 sudo[200660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpoxzzjpuswuyhiemrbywmsfokjiytrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162764.8814323-1182-261204150287867/AnsiballZ_systemd.py'
Jan 23 10:06:05 compute-0 sudo[200660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:05 compute-0 python3.9[200662]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:06 compute-0 sudo[200660]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:06:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:06 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:06 compute-0 sudo[200817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gutsvdxtbqndiheopoolfxhqqhjtiucg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162766.2360225-1182-194839249973496/AnsiballZ_systemd.py'
Jan 23 10:06:06 compute-0 sudo[200817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:06 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:06 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:06 compute-0 python3.9[200819]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:06 compute-0 sudo[200817]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:06.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:06:07.002Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:06:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:06:07.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:06:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:07.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:07 compute-0 sudo[200972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xilzjwdczgflxjcgzglbcipycbklozen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162766.9941528-1182-36513530792801/AnsiballZ_systemd.py'
Jan 23 10:06:07 compute-0 sudo[200972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:07 compute-0 ceph-mon[74335]: pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:06:07 compute-0 python3.9[200974]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:07 compute-0 sudo[200972]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:08 compute-0 sudo[201128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjruupylkjnyljekouhpbypsldnaonqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162767.8180912-1182-103507623391468/AnsiballZ_systemd.py'
Jan 23 10:06:08 compute-0 sudo[201128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:06:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:08 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:08 compute-0 python3.9[201130]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:08 compute-0 sudo[201128]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:08 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:08 compute-0 ceph-mon[74335]: pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:06:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:08 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:08 compute-0 sudo[201284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scuxacjmxnzskztfzimzcflzrycnawpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162768.5744743-1182-247343850983265/AnsiballZ_systemd.py'
Jan 23 10:06:08 compute-0 sudo[201284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:08.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:09 compute-0 python3.9[201286]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:09 compute-0 sudo[201284]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:09.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:09 compute-0 sudo[201439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqxlhxfidjzkjnymhrnvmdfgsnrjgklk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162769.3573308-1182-142816353407393/AnsiballZ_systemd.py'
Jan 23 10:06:09 compute-0 sudo[201439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:09 compute-0 python3.9[201441]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:09] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:06:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:09] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:06:10 compute-0 sudo[201439]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:06:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:10 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:10 compute-0 sudo[201599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxwnzgkfzbxfvxvvkwkgnvuscgpxchtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162770.119216-1182-145119542931106/AnsiballZ_systemd.py'
Jan 23 10:06:10 compute-0 sudo[201599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:10 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:10 compute-0 python3.9[201601]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:10 compute-0 sudo[201599]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:10 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:10.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:11 compute-0 sudo[201754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpmwaibiwamrhbfrqsmsvbxzoacygfnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162770.8570032-1182-217068112312045/AnsiballZ_systemd.py'
Jan 23 10:06:11 compute-0 sudo[201754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:11.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:11 compute-0 python3.9[201756]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:11 compute-0 sudo[201754]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:11 compute-0 ceph-mon[74335]: pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:06:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:11 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:06:11 compute-0 sudo[201910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsltmdgjvfwviywtggctpelapgnivzmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162771.5931447-1182-108642322387526/AnsiballZ_systemd.py'
Jan 23 10:06:11 compute-0 sudo[201910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:12 compute-0 python3.9[201912]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:12 compute-0 sudo[201910]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:06:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:12 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc478000bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:12 compute-0 sudo[202066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sooqqpubksvwrntoycrrthhmtdxgdxgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162772.348312-1182-66446551313034/AnsiballZ_systemd.py'
Jan 23 10:06:12 compute-0 sudo[202066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:12 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:12 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:12 compute-0 python3.9[202068]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:12 compute-0 sudo[202066]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:12.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:13 compute-0 ceph-mon[74335]: pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:06:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:13.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:13 compute-0 sudo[202221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcjgeryqboqcxgxzcspozpxbieltphtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162773.0870824-1182-70749749133810/AnsiballZ_systemd.py'
Jan 23 10:06:13 compute-0 sudo[202221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:13 compute-0 python3.9[202223]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:13 compute-0 sudo[202221]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:14 compute-0 sudo[202377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flnvxvbopavpwpqwcvzujumooiczyzky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162773.8215795-1182-121463181353291/AnsiballZ_systemd.py'
Jan 23 10:06:14 compute-0 sudo[202377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:06:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:14 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:14 compute-0 python3.9[202379]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:14 compute-0 sudo[202377]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:14 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc478001710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:14 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:14 compute-0 ceph-mon[74335]: pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:06:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:14 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:06:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:14 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:06:14 compute-0 sudo[202533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prvbhlsqypvtrgwynzyrkqweocqbmqfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162774.5820203-1182-204038866562640/AnsiballZ_systemd.py'
Jan 23 10:06:14 compute-0 sudo[202533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:14.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:15 compute-0 python3.9[202535]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 10:06:15 compute-0 sudo[202533]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:15.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:06:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:16 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:16 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:16 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc478001710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:16.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:06:17.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:06:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:17.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:17 compute-0 ceph-mon[74335]: pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:06:17 compute-0 podman[202565]: 2026-01-23 10:06:17.59704904 +0000 UTC m=+0.128211160 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 23 10:06:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:17 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:06:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:06:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:18 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:18 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:18 compute-0 sudo[202720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwvhupeecymsewdrfhvollbxfxehrrpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162778.3918676-1488-222928699368348/AnsiballZ_file.py'
Jan 23 10:06:18 compute-0 sudo[202720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:18 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:18 compute-0 python3.9[202722]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:06:18 compute-0 sudo[202720]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:18.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:19.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:19 compute-0 sudo[202872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebdotmprnumesljpogvwzphgohxxsmkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162779.0342207-1488-121055282382978/AnsiballZ_file.py'
Jan 23 10:06:19 compute-0 sudo[202872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:19 compute-0 ceph-mon[74335]: pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:06:19 compute-0 python3.9[202874]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:06:19 compute-0 sudo[202872]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:19 compute-0 sudo[202975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:06:19 compute-0 sudo[202975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:19 compute-0 sudo[202975]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:19 compute-0 sudo[203050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iawjsklrsevalluylfiymnylzcjzrrtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162779.6790075-1488-65022571545268/AnsiballZ_file.py'
Jan 23 10:06:19 compute-0 sudo[203050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:06:19
Jan 23 10:06:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:06:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:06:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['default.rgw.meta', '.nfs', 'vms', '.mgr', 'images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups']
Jan 23 10:06:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:06:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:19] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:06:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:19] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:06:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:06:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:06:20 compute-0 python3.9[203052]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:06:20 compute-0 sudo[203050]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:06:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:06:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:20 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc478001710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:20 compute-0 sudo[203203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvnzmyfbsrxbetjromuzrmihlekddwat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162780.2552345-1488-21686655582661/AnsiballZ_file.py'
Jan 23 10:06:20 compute-0 sudo[203203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:06:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:20 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:20 compute-0 python3.9[203205]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:06:20 compute-0 sudo[203203]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:20 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:20.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:21 compute-0 sudo[203355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltmncftplofffzwpsyipvxwkwzhezivi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162780.8940108-1488-165881704540310/AnsiballZ_file.py'
Jan 23 10:06:21 compute-0 sudo[203355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:21.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:21 compute-0 python3.9[203357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:06:21 compute-0 sudo[203355]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:21 compute-0 ceph-mon[74335]: pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:06:21 compute-0 auditd[699]: Audit daemon rotating log files
Jan 23 10:06:21 compute-0 sudo[203508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmsqzuoiqszepwsecxljksjxyquuvjwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162781.6021333-1488-264921538663584/AnsiballZ_file.py'
Jan 23 10:06:21 compute-0 sudo[203508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:22 compute-0 python3.9[203510]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:06:22 compute-0 sudo[203508]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:06:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:22 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:22 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc478002ba0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:22 compute-0 ceph-mon[74335]: pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:06:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:22 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:22.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:23 compute-0 python3.9[203661]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:06:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:23.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100623 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:06:23 compute-0 sudo[203812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwvbtexbqekiiorcqnkrxifowludmklb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162783.3683176-1641-184108390210491/AnsiballZ_stat.py'
Jan 23 10:06:23 compute-0 sudo[203812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:24 compute-0 python3.9[203814]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:24 compute-0 sudo[203812]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:06:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:24 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:24 compute-0 sudo[203938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iitfxkcqtwcjbfapzlioaeuhctkmadaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162783.3683176-1641-184108390210491/AnsiballZ_copy.py'
Jan 23 10:06:24 compute-0 sudo[203938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:24 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:24 compute-0 python3.9[203940]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769162783.3683176-1641-184108390210491/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:24 compute-0 sudo[203938]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:24 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:24.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:25 compute-0 sudo[204090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrrnflhvazmukugivpsbybvsvufetxej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162784.8688092-1641-138553778693346/AnsiballZ_stat.py'
Jan 23 10:06:25 compute-0 sudo[204090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:25.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:25 compute-0 python3.9[204092]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:25 compute-0 sudo[204090]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:25 compute-0 ceph-mon[74335]: pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:06:25 compute-0 sudo[204216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpteopbmzvhovuoliaxjzmetsqqdkcid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162784.8688092-1641-138553778693346/AnsiballZ_copy.py'
Jan 23 10:06:25 compute-0 sudo[204216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:25.877326) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162785877785, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 3815, "num_deletes": 502, "total_data_size": 7840309, "memory_usage": 7958344, "flush_reason": "Manual Compaction"}
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162785952581, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 4407138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13266, "largest_seqno": 17080, "table_properties": {"data_size": 4395820, "index_size": 6404, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3845, "raw_key_size": 30531, "raw_average_key_size": 19, "raw_value_size": 4368991, "raw_average_value_size": 2857, "num_data_blocks": 279, "num_entries": 1529, "num_filter_entries": 1529, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162371, "oldest_key_time": 1769162371, "file_creation_time": 1769162785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 75571 microseconds, and 12992 cpu microseconds.
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:25.953005) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 4407138 bytes OK
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:25.953158) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:25.956483) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:25.956514) EVENT_LOG_v1 {"time_micros": 1769162785956508, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:25.956543) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 7824778, prev total WAL file size 7824778, number of live WAL files 2.
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:25.959673) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(4303KB)], [32(12MB)]
Jan 23 10:06:25 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162785959813, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 17630648, "oldest_snapshot_seqno": -1}
Jan 23 10:06:26 compute-0 python3.9[204218]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769162784.8688092-1641-138553778693346/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:26 compute-0 sudo[204216]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4978 keys, 13174108 bytes, temperature: kUnknown
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162786088327, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 13174108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13139041, "index_size": 21517, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 124796, "raw_average_key_size": 25, "raw_value_size": 13046833, "raw_average_value_size": 2620, "num_data_blocks": 899, "num_entries": 4978, "num_filter_entries": 4978, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769162785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:26.088618) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 13174108 bytes
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:26.089928) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.1 rd, 102.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.2, 12.6 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(7.0) write-amplify(3.0) OK, records in: 5806, records dropped: 828 output_compression: NoCompression
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:26.089954) EVENT_LOG_v1 {"time_micros": 1769162786089938, "job": 14, "event": "compaction_finished", "compaction_time_micros": 128603, "compaction_time_cpu_micros": 34835, "output_level": 6, "num_output_files": 1, "total_output_size": 13174108, "num_input_records": 5806, "num_output_records": 4978, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162786090781, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162786092953, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:25.959508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:26.093000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:26.093005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:26.093006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:26.093008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:06:26 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:06:26.093010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:06:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:06:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:26 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:26 compute-0 sudo[204369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqtjggcoplgabophxjhesqwrsmzlwavt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162786.1850514-1641-220095256099422/AnsiballZ_stat.py'
Jan 23 10:06:26 compute-0 sudo[204369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:26 compute-0 python3.9[204371]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:26 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:26 compute-0 sudo[204369]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:26 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:26 compute-0 ceph-mon[74335]: pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:06:26 compute-0 sudo[204494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cubbarktjzcdyblxalaafomljiwoozdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162786.1850514-1641-220095256099422/AnsiballZ_copy.py'
Jan 23 10:06:26 compute-0 sudo[204494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:26.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:06:27.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:06:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:06:27.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:06:27 compute-0 python3.9[204496]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769162786.1850514-1641-220095256099422/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:27 compute-0 sudo[204494]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:27.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:27 compute-0 sudo[204646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdoypdlvpyigijleiusymtphpylovzhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162787.240598-1641-69210301332907/AnsiballZ_stat.py'
Jan 23 10:06:27 compute-0 sudo[204646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:27 compute-0 python3.9[204648]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:27 compute-0 sudo[204646]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:28 compute-0 sudo[204772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfwvszmlxkkqgdbkliiqbgdaxttsbbun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162787.240598-1641-69210301332907/AnsiballZ_copy.py'
Jan 23 10:06:28 compute-0 sudo[204772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:06:28 compute-0 python3.9[204775]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769162787.240598-1641-69210301332907/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:28 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4780034c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:28 compute-0 sudo[204772]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:28 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:28 compute-0 sudo[204925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kozxyphctyxsbmgjbayfwduoqnpczytg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162788.4484224-1641-122107980866026/AnsiballZ_stat.py'
Jan 23 10:06:28 compute-0 sudo[204925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:28 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:28 compute-0 python3.9[204927]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:28 compute-0 sudo[204925]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:28.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:29 compute-0 sudo[205050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvgadaoppebrftaajrwpwshksnjktwud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162788.4484224-1641-122107980866026/AnsiballZ_copy.py'
Jan 23 10:06:29 compute-0 sudo[205050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:29.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:29 compute-0 ceph-mon[74335]: pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:06:29 compute-0 python3.9[205052]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769162788.4484224-1641-122107980866026/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:29 compute-0 sudo[205050]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:29] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:06:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:29] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:06:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:06:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:30 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:30 compute-0 sudo[205204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nckjefmxztlxlcbxgxxoerpkvfpahknz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162789.6973712-1641-99238990616106/AnsiballZ_stat.py'
Jan 23 10:06:30 compute-0 sudo[205204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:30 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4780034c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:30 compute-0 python3.9[205206]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:30 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:30 compute-0 sudo[205204]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:30.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:31 compute-0 sudo[205329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjxjcctordlrmmluyxrbsmbdxahiuoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162789.6973712-1641-99238990616106/AnsiballZ_copy.py'
Jan 23 10:06:31 compute-0 sudo[205329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:31.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:31 compute-0 ceph-mon[74335]: pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:06:31 compute-0 python3.9[205331]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769162789.6973712-1641-99238990616106/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:31 compute-0 sudo[205329]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:31 compute-0 sudo[205481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puctrauqnmxhowlwaimxjghddcofmmie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162791.492005-1641-267820814472815/AnsiballZ_stat.py'
Jan 23 10:06:31 compute-0 sudo[205481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:31 compute-0 python3.9[205483]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:31 compute-0 sudo[205481]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:06:32 compute-0 sudo[205606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cavpqwchpxpcwlbzfyymouijvifzsjkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162791.492005-1641-267820814472815/AnsiballZ_copy.py'
Jan 23 10:06:32 compute-0 sudo[205606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:32 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:32 compute-0 python3.9[205608]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769162791.492005-1641-267820814472815/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:32 compute-0 sudo[205606]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:32 compute-0 podman[205609]: 2026-01-23 10:06:32.551322618 +0000 UTC m=+0.083290993 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 10:06:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:32 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:32 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4780034c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:32 compute-0 sudo[205777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kphmndwfuzpdzpjvctulufsepxwdttxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162792.5989475-1641-96386053671778/AnsiballZ_stat.py'
Jan 23 10:06:32 compute-0 sudo[205777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:32.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:33 compute-0 python3.9[205779]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:33 compute-0 sudo[205777]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:33.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:33 compute-0 sudo[205902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phodtqjizforfkvjsykosqboqymfswpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162792.5989475-1641-96386053671778/AnsiballZ_copy.py'
Jan 23 10:06:33 compute-0 sudo[205902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:33 compute-0 ceph-mon[74335]: pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:06:33 compute-0 python3.9[205904]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769162792.5989475-1641-96386053671778/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:33 compute-0 sudo[205902]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:34 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:34 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:34 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:34 compute-0 sudo[206056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcabvyjgumkrkdivyysgeqxlxgdfgubp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162794.4200463-1980-142494333015268/AnsiballZ_command.py'
Jan 23 10:06:34 compute-0 sudo[206056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:34.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:06:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:06:35 compute-0 python3.9[206058]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 23 10:06:35 compute-0 sudo[206056]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:35.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:35 compute-0 ceph-mon[74335]: pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:06:35 compute-0 sudo[206210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpmzsdjbxemcleyfsklfzmwxaxkdcpgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162795.5700598-2007-84447098937143/AnsiballZ_file.py'
Jan 23 10:06:35 compute-0 sudo[206210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:36 compute-0 python3.9[206212]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:36 compute-0 sudo[206210]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:36 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4780034c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:36 compute-0 sudo[206363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfoclyooyhxvdylajhxmoxyialghzrin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162796.225121-2007-255784580596071/AnsiballZ_file.py'
Jan 23 10:06:36 compute-0 sudo[206363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:36 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:36 compute-0 python3.9[206365]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:36 compute-0 sudo[206363]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:36 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:36.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:06:37.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:06:37 compute-0 sudo[206515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpomnqtlysclwkajfsstmlumygnalmhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162796.8216202-2007-125624382516967/AnsiballZ_file.py'
Jan 23 10:06:37 compute-0 sudo[206515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:37 compute-0 python3.9[206517]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:37.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:37 compute-0 sudo[206515]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:37 compute-0 sudo[206667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyjctpecxrofbeburogolrlbkpujtrbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162797.4209905-2007-186202863231498/AnsiballZ_file.py'
Jan 23 10:06:37 compute-0 sudo[206667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:37 compute-0 ceph-mon[74335]: pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:37 compute-0 python3.9[206669]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:37 compute-0 sudo[206667]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:38 compute-0 sudo[206821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hukidcfqeommvoxmnlbdznzmcscdwxwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162798.0307288-2007-83091822524282/AnsiballZ_file.py'
Jan 23 10:06:38 compute-0 sudo[206821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:38 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:38 compute-0 python3.9[206823]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:38 compute-0 sudo[206821]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:38 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4780034c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:38 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:38 compute-0 ceph-mon[74335]: pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:38.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:39 compute-0 sudo[206973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyiqudgkrwfgwupevpxosygquywydioa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162798.635918-2007-24639096893167/AnsiballZ_file.py'
Jan 23 10:06:39 compute-0 sudo[206973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:39 compute-0 python3.9[206975]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:39.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:39 compute-0 sudo[206973]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:39 compute-0 sudo[207125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mozsmtvqcdbyigrpnzwfxqzvvojgidvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162799.4441674-2007-93397459275718/AnsiballZ_file.py'
Jan 23 10:06:39 compute-0 sudo[207125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:39 compute-0 python3.9[207128]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:39 compute-0 sudo[207129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:06:39 compute-0 sudo[207129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:39 compute-0 sudo[207129]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:39 compute-0 sudo[207125]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:39] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:06:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:39] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:06:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:40 compute-0 sudo[207304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiufgjzdhoxqylqglucoigvybwnyxaah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162800.0872753-2007-121244743295235/AnsiballZ_file.py'
Jan 23 10:06:40 compute-0 sudo[207304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:40 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:40 compute-0 python3.9[207306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:40 compute-0 sudo[207304]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:40 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc46c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:40 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc4780034c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:40 compute-0 sudo[207456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esfsccxinvfdztvzxutxkizsfxnjvctf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162800.6749008-2007-3414751751779/AnsiballZ_file.py'
Jan 23 10:06:40 compute-0 sudo[207456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:40.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:41 compute-0 python3.9[207458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:41 compute-0 sudo[207456]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:41.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:41 compute-0 ceph-mon[74335]: pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:41 compute-0 sudo[207608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgxwztjvknlbigxlflgsmrbanbljucod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162801.3435528-2007-59903071837955/AnsiballZ_file.py'
Jan 23 10:06:41 compute-0 sudo[207608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:41 compute-0 python3.9[207610]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:41 compute-0 sudo[207608]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:42 compute-0 sudo[207762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pchfwyscsjttydvrrsljfzaovrtqwryd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162801.9482179-2007-147334656836160/AnsiballZ_file.py'
Jan 23 10:06:42 compute-0 sudo[207762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:06:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:42 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:42 compute-0 python3.9[207764]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:42 compute-0 sudo[207762]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:42 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:42 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:42.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:43 compute-0 sudo[207917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcidpccqzlfrdqbnuubqibscxfbnlguq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162802.5723858-2007-158286795730512/AnsiballZ_file.py'
Jan 23 10:06:43 compute-0 sudo[207917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:43.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:43 compute-0 python3.9[207919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:43 compute-0 sudo[207917]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:43 compute-0 ceph-mon[74335]: pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:06:43 compute-0 sudo[208069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fctthsylrvmpflcqmerllcodbfncyqsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162803.503026-2007-100383546300177/AnsiballZ_file.py'
Jan 23 10:06:43 compute-0 sudo[208069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:43 compute-0 python3.9[208072]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:43 compute-0 sudo[208069]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:44 compute-0 sudo[208223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omlyzpjgrnsjibszrpxikjqpmbmkwbea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162804.0664923-2007-118627652353320/AnsiballZ_file.py'
Jan 23 10:06:44 compute-0 sudo[208223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:44 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc474001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:44 compute-0 python3.9[208225]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:44 compute-0 sudo[208223]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:44 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc49c0025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:44 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc47c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:06:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:44.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:45.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:45 compute-0 ceph-mon[74335]: pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:45 compute-0 sudo[208375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwbwlzmgokurjiygsgrpiifaocjeifhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162805.2690969-2304-251225896168942/AnsiballZ_stat.py'
Jan 23 10:06:45 compute-0 sudo[208375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:45 compute-0 python3.9[208377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:45 compute-0 sudo[208375]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:46 compute-0 sudo[208499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkipywnudxzcjbydvnuslzcvugbzfoiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162805.2690969-2304-251225896168942/AnsiballZ_copy.py'
Jan 23 10:06:46 compute-0 sudo[208499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:46 compute-0 sudo[208502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:06:46 compute-0 sudo[208502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:46 compute-0 sudo[208502]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:46 compute-0 sudo[208528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 10:06:46 compute-0 sudo[208528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:46 compute-0 python3.9[208501]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162805.2690969-2304-251225896168942/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:46 compute-0 sudo[208499]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:46 compute-0 kernel: ganesha.nfsd[207767]: segfault at 50 ip 00007fc521f3a32e sp 00007fc48a7fb210 error 4 in libntirpc.so.5.8[7fc521f1f000+2c000] likely on CPU 7 (core 0, socket 7)
Jan 23 10:06:46 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 23 10:06:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[188000]: 23/01/2026 10:06:46 : epoch 697347e3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc480000f30 fd 48 proxy ignored for local
Jan 23 10:06:46 compute-0 systemd[1]: Started Process Core Dump (PID 208559/UID 0).
Jan 23 10:06:46 compute-0 sudo[208771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckxveeokyksdmuetiarruodkhgneglvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162806.488913-2304-212940267527508/AnsiballZ_stat.py'
Jan 23 10:06:46 compute-0 sudo[208771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:46 compute-0 podman[208770]: 2026-01-23 10:06:46.914070373 +0000 UTC m=+0.124191883 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:06:46 compute-0 python3.9[208779]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:46.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:06:47.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:06:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:06:47.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:06:47 compute-0 sudo[208771]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:47 compute-0 podman[208770]: 2026-01-23 10:06:47.037967026 +0000 UTC m=+0.248088536 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 23 10:06:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:47.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:47 compute-0 sudo[208982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wezqeguzqmoapqhvuejkdupsirzceenr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162806.488913-2304-212940267527508/AnsiballZ_copy.py'
Jan 23 10:06:47 compute-0 sudo[208982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:47 compute-0 ceph-mon[74335]: pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:47 compute-0 podman[209014]: 2026-01-23 10:06:47.562310706 +0000 UTC m=+0.089385631 container exec 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:06:47 compute-0 python3.9[208987]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162806.488913-2304-212940267527508/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:47 compute-0 podman[209014]: 2026-01-23 10:06:47.602844545 +0000 UTC m=+0.129919440 container exec_died 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:06:47 compute-0 sudo[208982]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:47 compute-0 sshd-session[208865]: Invalid user solana from 80.94.92.168 port 37836
Jan 23 10:06:47 compute-0 podman[209050]: 2026-01-23 10:06:47.776096864 +0000 UTC m=+0.102626796 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 10:06:47 compute-0 sshd-session[208865]: Connection closed by invalid user solana 80.94.92.168 port 37836 [preauth]
Jan 23 10:06:47 compute-0 systemd-coredump[208577]: Process 188004 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 60:
                                                    #0  0x00007fc521f3a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 23 10:06:47 compute-0 podman[209224]: 2026-01-23 10:06:47.988025437 +0000 UTC m=+0.068628357 container exec 2b6760de5d44f38fc3fdf786d207211c3e20052bb951e0c67148ef3069fe3e6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 10:06:48 compute-0 podman[209224]: 2026-01-23 10:06:48.002717985 +0000 UTC m=+0.083320875 container exec_died 2b6760de5d44f38fc3fdf786d207211c3e20052bb951e0c67148ef3069fe3e6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 10:06:48 compute-0 systemd[1]: systemd-coredump@5-208559-0.service: Deactivated successfully.
Jan 23 10:06:48 compute-0 systemd[1]: systemd-coredump@5-208559-0.service: Consumed 1.521s CPU time.
Jan 23 10:06:48 compute-0 podman[209280]: 2026-01-23 10:06:48.048201227 +0000 UTC m=+0.032785544 container died 2b6760de5d44f38fc3fdf786d207211c3e20052bb951e0c67148ef3069fe3e6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:06:48 compute-0 sudo[209306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unzttdscgbxdplklslrrmmvhnpwcovza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162807.77095-2304-270066132892378/AnsiballZ_stat.py'
Jan 23 10:06:48 compute-0 sudo[209306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7727bf51034fb5175fcd75d062f1bc899f0e220520c765e285e3ae9d73af6bf-merged.mount: Deactivated successfully.
Jan 23 10:06:48 compute-0 podman[209300]: 2026-01-23 10:06:48.14525387 +0000 UTC m=+0.115212232 container remove 2b6760de5d44f38fc3fdf786d207211c3e20052bb951e0c67148ef3069fe3e6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:06:48 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
Jan 23 10:06:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:48 compute-0 python3.9[209325]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:48 compute-0 sudo[209306]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:48 compute-0 podman[209364]: 2026-01-23 10:06:48.300052722 +0000 UTC m=+0.076904548 container exec 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 10:06:48 compute-0 podman[209364]: 2026-01-23 10:06:48.320673652 +0000 UTC m=+0.097525458 container exec_died 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 10:06:48 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 10:06:48 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.915s CPU time.
Jan 23 10:06:48 compute-0 podman[209497]: 2026-01-23 10:06:48.535392577 +0000 UTC m=+0.055441604 container exec 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, release=1793, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Jan 23 10:06:48 compute-0 podman[209497]: 2026-01-23 10:06:48.553909635 +0000 UTC m=+0.073958642 container exec_died 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, name=keepalived, version=2.2.4, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9)
Jan 23 10:06:48 compute-0 sudo[209607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnapvpfrwmgaegchaysodmpvcecloouk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162807.77095-2304-270066132892378/AnsiballZ_copy.py'
Jan 23 10:06:48 compute-0 sudo[209607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:48 compute-0 podman[209635]: 2026-01-23 10:06:48.768979729 +0000 UTC m=+0.056895306 container exec a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:06:48 compute-0 python3.9[209617]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162807.77095-2304-270066132892378/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:48 compute-0 podman[209635]: 2026-01-23 10:06:48.828795099 +0000 UTC m=+0.116710646 container exec_died a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:06:48 compute-0 sudo[209607]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:49.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:49 compute-0 podman[209757]: 2026-01-23 10:06:49.047590562 +0000 UTC m=+0.058382549 container exec 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 10:06:49 compute-0 sudo[209889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwzvaliybxbgfaibtsapmapipnibizjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162808.965326-2304-29544674180470/AnsiballZ_stat.py'
Jan 23 10:06:49 compute-0 sudo[209889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:49 compute-0 podman[209757]: 2026-01-23 10:06:49.263017448 +0000 UTC m=+0.273809435 container exec_died 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 10:06:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:49.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:49 compute-0 python3.9[209891]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:49 compute-0 sudo[209889]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:49 compute-0 ceph-mon[74335]: pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:49 compute-0 podman[210021]: 2026-01-23 10:06:49.641948998 +0000 UTC m=+0.056492074 container exec 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:06:49 compute-0 podman[210021]: 2026-01-23 10:06:49.715826147 +0000 UTC m=+0.130369233 container exec_died 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:06:49 compute-0 sudo[208528]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:06:49 compute-0 sudo[210134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okxrucastjqndfnyfveehgobaztaxgmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162808.965326-2304-29544674180470/AnsiballZ_copy.py'
Jan 23 10:06:49 compute-0 sudo[210134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:06:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:49 compute-0 sudo[210137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:06:49 compute-0 sudo[210137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:49 compute-0 sudo[210137]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:49] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:06:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:49] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:06:49 compute-0 sudo[210162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:06:49 compute-0 sudo[210162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:50 compute-0 python3.9[210136]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162808.965326-2304-29544674180470/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:06:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:06:50 compute-0 sudo[210134]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:06:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:06:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:06:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:06:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:06:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:06:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100650 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:06:50 compute-0 sudo[210367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvgdhomhrqqkulpqatejubzsmmdevvrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162810.1936488-2304-127902026182353/AnsiballZ_stat.py'
Jan 23 10:06:50 compute-0 sudo[210367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:50 compute-0 sudo[210162]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:06:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:06:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:06:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:06:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:06:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:06:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:06:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:06:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:06:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:06:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:06:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:06:50 compute-0 sudo[210370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:06:50 compute-0 sudo[210370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:50 compute-0 sudo[210370]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:50 compute-0 python3.9[210369]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:50 compute-0 sudo[210367]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:50 compute-0 sudo[210395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:06:50 compute-0 sudo[210395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:06:50 compute-0 ceph-mon[74335]: pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:06:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:06:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:06:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:06:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:06:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:06:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:51 compute-0 sudo[210578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmyjxvnqqqnmdjtevriboukvisrbhbiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162810.1936488-2304-127902026182353/AnsiballZ_copy.py'
Jan 23 10:06:51 compute-0 sudo[210578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:51 compute-0 podman[210584]: 2026-01-23 10:06:51.145851137 +0000 UTC m=+0.043074524 container create 2b399d5fbf0cf10d62042935c10b41b5bb854af7b183ad3f18bbd2635e6fb8a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:06:51 compute-0 systemd[1]: Started libpod-conmon-2b399d5fbf0cf10d62042935c10b41b5bb854af7b183ad3f18bbd2635e6fb8a0.scope.
Jan 23 10:06:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:06:51 compute-0 podman[210584]: 2026-01-23 10:06:51.123671002 +0000 UTC m=+0.020894409 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:06:51 compute-0 podman[210584]: 2026-01-23 10:06:51.226439981 +0000 UTC m=+0.123663388 container init 2b399d5fbf0cf10d62042935c10b41b5bb854af7b183ad3f18bbd2635e6fb8a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_colden, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 10:06:51 compute-0 podman[210584]: 2026-01-23 10:06:51.236171164 +0000 UTC m=+0.133394541 container start 2b399d5fbf0cf10d62042935c10b41b5bb854af7b183ad3f18bbd2635e6fb8a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 10:06:51 compute-0 angry_colden[210600]: 167 167
Jan 23 10:06:51 compute-0 systemd[1]: libpod-2b399d5fbf0cf10d62042935c10b41b5bb854af7b183ad3f18bbd2635e6fb8a0.scope: Deactivated successfully.
Jan 23 10:06:51 compute-0 podman[210584]: 2026-01-23 10:06:51.244814725 +0000 UTC m=+0.142038142 container attach 2b399d5fbf0cf10d62042935c10b41b5bb854af7b183ad3f18bbd2635e6fb8a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_colden, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 10:06:51 compute-0 podman[210584]: 2026-01-23 10:06:51.24532687 +0000 UTC m=+0.142550247 container died 2b399d5fbf0cf10d62042935c10b41b5bb854af7b183ad3f18bbd2635e6fb8a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 10:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6eba0babea119948268c8ae353a1f8be03adcab5ace75cbcd081bdadfa6df8b-merged.mount: Deactivated successfully.
Jan 23 10:06:51 compute-0 python3.9[210583]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162810.1936488-2304-127902026182353/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:51 compute-0 podman[210584]: 2026-01-23 10:06:51.290988758 +0000 UTC m=+0.188212135 container remove 2b399d5fbf0cf10d62042935c10b41b5bb854af7b183ad3f18bbd2635e6fb8a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_colden, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 23 10:06:51 compute-0 systemd[1]: libpod-conmon-2b399d5fbf0cf10d62042935c10b41b5bb854af7b183ad3f18bbd2635e6fb8a0.scope: Deactivated successfully.
Jan 23 10:06:51 compute-0 sudo[210578]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:51.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:51 compute-0 podman[210648]: 2026-01-23 10:06:51.456224024 +0000 UTC m=+0.048358038 container create 34619c21d036c3acf2e6e05556b82d195e59b40b397c5dc7c988e9964293d226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Jan 23 10:06:51 compute-0 systemd[1]: Started libpod-conmon-34619c21d036c3acf2e6e05556b82d195e59b40b397c5dc7c988e9964293d226.scope.
Jan 23 10:06:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c90d9319be07a113e8541060be421281ac381cf0530a30f1b62e878f13ce8c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c90d9319be07a113e8541060be421281ac381cf0530a30f1b62e878f13ce8c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c90d9319be07a113e8541060be421281ac381cf0530a30f1b62e878f13ce8c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c90d9319be07a113e8541060be421281ac381cf0530a30f1b62e878f13ce8c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c90d9319be07a113e8541060be421281ac381cf0530a30f1b62e878f13ce8c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:51 compute-0 podman[210648]: 2026-01-23 10:06:51.436605523 +0000 UTC m=+0.028739557 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:06:51 compute-0 podman[210648]: 2026-01-23 10:06:51.543145312 +0000 UTC m=+0.135279346 container init 34619c21d036c3acf2e6e05556b82d195e59b40b397c5dc7c988e9964293d226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 10:06:51 compute-0 podman[210648]: 2026-01-23 10:06:51.551547836 +0000 UTC m=+0.143681850 container start 34619c21d036c3acf2e6e05556b82d195e59b40b397c5dc7c988e9964293d226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:06:51 compute-0 podman[210648]: 2026-01-23 10:06:51.558806097 +0000 UTC m=+0.150940121 container attach 34619c21d036c3acf2e6e05556b82d195e59b40b397c5dc7c988e9964293d226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:06:51 compute-0 wizardly_dhawan[210708]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:06:51 compute-0 wizardly_dhawan[210708]: --> All data devices are unavailable
Jan 23 10:06:51 compute-0 systemd[1]: libpod-34619c21d036c3acf2e6e05556b82d195e59b40b397c5dc7c988e9964293d226.scope: Deactivated successfully.
Jan 23 10:06:51 compute-0 podman[210648]: 2026-01-23 10:06:51.940064285 +0000 UTC m=+0.532198299 container died 34619c21d036c3acf2e6e05556b82d195e59b40b397c5dc7c988e9964293d226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:06:51 compute-0 sudo[210804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukaqvhehsfnrqimwlpmlkmrrsxzbafte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162811.4394095-2304-197421256817096/AnsiballZ_stat.py'
Jan 23 10:06:51 compute-0 sudo[210804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c90d9319be07a113e8541060be421281ac381cf0530a30f1b62e878f13ce8c7-merged.mount: Deactivated successfully.
Jan 23 10:06:51 compute-0 podman[210648]: 2026-01-23 10:06:51.995942361 +0000 UTC m=+0.588076375 container remove 34619c21d036c3acf2e6e05556b82d195e59b40b397c5dc7c988e9964293d226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 23 10:06:52 compute-0 systemd[1]: libpod-conmon-34619c21d036c3acf2e6e05556b82d195e59b40b397c5dc7c988e9964293d226.scope: Deactivated successfully.
Jan 23 10:06:52 compute-0 sudo[210395]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:06:52 compute-0 sudo[210821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:06:52 compute-0 sudo[210821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:52 compute-0 sudo[210821]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:52 compute-0 python3.9[210813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:52 compute-0 sudo[210847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:06:52 compute-0 sudo[210847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:52 compute-0 sudo[210804]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:52 compute-0 sudo[211034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gukhubwizvnqwgtablpusynxxiwyuvob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162811.4394095-2304-197421256817096/AnsiballZ_copy.py'
Jan 23 10:06:52 compute-0 sudo[211034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:52 compute-0 podman[211036]: 2026-01-23 10:06:52.743170882 +0000 UTC m=+0.049839611 container create 72b146019dc1f52d38dea82c43c42611637b7ea82ab98e7e87a5220ac12eca40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:06:52 compute-0 systemd[1]: Started libpod-conmon-72b146019dc1f52d38dea82c43c42611637b7ea82ab98e7e87a5220ac12eca40.scope.
Jan 23 10:06:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100652 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:06:52 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:06:52 compute-0 podman[211036]: 2026-01-23 10:06:52.721349027 +0000 UTC m=+0.028017776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:06:52 compute-0 podman[211036]: 2026-01-23 10:06:52.827876435 +0000 UTC m=+0.134545194 container init 72b146019dc1f52d38dea82c43c42611637b7ea82ab98e7e87a5220ac12eca40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_almeida, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:06:52 compute-0 podman[211036]: 2026-01-23 10:06:52.83664173 +0000 UTC m=+0.143310459 container start 72b146019dc1f52d38dea82c43c42611637b7ea82ab98e7e87a5220ac12eca40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 10:06:52 compute-0 goofy_almeida[211054]: 167 167
Jan 23 10:06:52 compute-0 systemd[1]: libpod-72b146019dc1f52d38dea82c43c42611637b7ea82ab98e7e87a5220ac12eca40.scope: Deactivated successfully.
Jan 23 10:06:52 compute-0 podman[211036]: 2026-01-23 10:06:52.845163268 +0000 UTC m=+0.151832027 container attach 72b146019dc1f52d38dea82c43c42611637b7ea82ab98e7e87a5220ac12eca40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:06:52 compute-0 podman[211036]: 2026-01-23 10:06:52.846209978 +0000 UTC m=+0.152878827 container died 72b146019dc1f52d38dea82c43c42611637b7ea82ab98e7e87a5220ac12eca40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_almeida, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 10:06:52 compute-0 python3.9[211037]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162811.4394095-2304-197421256817096/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-362ea3bf97cd5d912f6af38ea68d525f8e882ced10b54a16510029c3ff3f23d9-merged.mount: Deactivated successfully.
Jan 23 10:06:52 compute-0 sudo[211034]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:52 compute-0 podman[211036]: 2026-01-23 10:06:52.901648041 +0000 UTC m=+0.208316770 container remove 72b146019dc1f52d38dea82c43c42611637b7ea82ab98e7e87a5220ac12eca40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_almeida, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:06:52 compute-0 systemd[1]: libpod-conmon-72b146019dc1f52d38dea82c43c42611637b7ea82ab98e7e87a5220ac12eca40.scope: Deactivated successfully.
Jan 23 10:06:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:53.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:53 compute-0 podman[211120]: 2026-01-23 10:06:53.088192336 +0000 UTC m=+0.049320105 container create 5cf4daa3745d9e1c1af988fedc71ad3353617c47cd2c08893243d843d7c238be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_almeida, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 23 10:06:53 compute-0 systemd[1]: Started libpod-conmon-5cf4daa3745d9e1c1af988fedc71ad3353617c47cd2c08893243d843d7c238be.scope.
Jan 23 10:06:53 compute-0 podman[211120]: 2026-01-23 10:06:53.066125564 +0000 UTC m=+0.027253343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:06:53 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd39da5dd22b8e0136c36d46690846f583aae730735ef2cb087ef75df8be6af5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd39da5dd22b8e0136c36d46690846f583aae730735ef2cb087ef75df8be6af5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd39da5dd22b8e0136c36d46690846f583aae730735ef2cb087ef75df8be6af5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd39da5dd22b8e0136c36d46690846f583aae730735ef2cb087ef75df8be6af5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:53 compute-0 podman[211120]: 2026-01-23 10:06:53.190275415 +0000 UTC m=+0.151403214 container init 5cf4daa3745d9e1c1af988fedc71ad3353617c47cd2c08893243d843d7c238be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_almeida, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 23 10:06:53 compute-0 podman[211120]: 2026-01-23 10:06:53.199688259 +0000 UTC m=+0.160816028 container start 5cf4daa3745d9e1c1af988fedc71ad3353617c47cd2c08893243d843d7c238be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_almeida, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:06:53 compute-0 podman[211120]: 2026-01-23 10:06:53.207151596 +0000 UTC m=+0.168279385 container attach 5cf4daa3745d9e1c1af988fedc71ad3353617c47cd2c08893243d843d7c238be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_almeida, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:06:53 compute-0 ceph-mon[74335]: pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:06:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:53.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:53 compute-0 sudo[211248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjcjgcmokpdiocojbicuiyqegtybfypj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162813.0417812-2304-269364715432237/AnsiballZ_stat.py'
Jan 23 10:06:53 compute-0 sudo[211248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:53 compute-0 stoic_almeida[211170]: {
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:     "1": [
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:         {
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             "devices": [
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "/dev/loop3"
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             ],
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             "lv_name": "ceph_lv0",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             "lv_size": "21470642176",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             "name": "ceph_lv0",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             "tags": {
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.cluster_name": "ceph",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.crush_device_class": "",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.encrypted": "0",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.osd_id": "1",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.type": "block",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.vdo": "0",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:                 "ceph.with_tpm": "0"
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             },
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             "type": "block",
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:             "vg_name": "ceph_vg0"
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:         }
Jan 23 10:06:53 compute-0 stoic_almeida[211170]:     ]
Jan 23 10:06:53 compute-0 stoic_almeida[211170]: }
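[annotation] The JSON block emitted by the stoic_almeida container above is the per-host inventory that cephadm's ceph-volume helper reports (it has the shape of `ceph-volume lvm list --format json`): a single LVM-backed block device, ceph_vg0/ceph_lv0 on /dev/loop3, tagged as OSD 1 with osd_fsid e272688e-6b15-4719-9011-a7e7310819a5. A minimal sketch (hypothetical helper, not part of cephadm) that parses output of this shape into an OSD-id → device summary:

```python
import json

def summarize_lvm_osds(raw: str) -> dict:
    """Map OSD id -> lv_path/devices/osd_fsid from ceph-volume lvm list JSON."""
    inventory = json.loads(raw)
    summary = {}
    for osd_id, lvs in inventory.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            summary[osd_id] = {
                "lv_path": lv.get("lv_path"),
                "devices": lv.get("devices", []),
                "osd_fsid": tags.get("ceph.osd_fsid"),
                "encrypted": tags.get("ceph.encrypted") == "1",
            }
    return summary

# With the capture above: summarize_lvm_osds(captured)["1"]["lv_path"]
# -> "/dev/ceph_vg0/ceph_lv0" on /dev/loop3
```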
Jan 23 10:06:53 compute-0 python3.9[211250]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:53 compute-0 systemd[1]: libpod-5cf4daa3745d9e1c1af988fedc71ad3353617c47cd2c08893243d843d7c238be.scope: Deactivated successfully.
Jan 23 10:06:53 compute-0 podman[211120]: 2026-01-23 10:06:53.557552847 +0000 UTC m=+0.518680646 container died 5cf4daa3745d9e1c1af988fedc71ad3353617c47cd2c08893243d843d7c238be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_almeida, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:06:53 compute-0 sudo[211248]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd39da5dd22b8e0136c36d46690846f583aae730735ef2cb087ef75df8be6af5-merged.mount: Deactivated successfully.
Jan 23 10:06:53 compute-0 podman[211120]: 2026-01-23 10:06:53.60408351 +0000 UTC m=+0.565211269 container remove 5cf4daa3745d9e1c1af988fedc71ad3353617c47cd2c08893243d843d7c238be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:06:53 compute-0 systemd[1]: libpod-conmon-5cf4daa3745d9e1c1af988fedc71ad3353617c47cd2c08893243d843d7c238be.scope: Deactivated successfully.
Jan 23 10:06:53 compute-0 sudo[210847]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:53 compute-0 sudo[211310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:06:53 compute-0 sudo[211310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:53 compute-0 sudo[211310]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:53 compute-0 sudo[211359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:06:53 compute-0 sudo[211359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
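[annotation] The command above is cephadm running `ceph-volume ... raw list --format json` through its bundled script under /var/lib/ceph/<fsid>/, inside a throwaway container (the create/start/died/remove cycles nearby); the elegant_shockley container a few lines below answers with an empty object, presumably because this host's OSD is LVM-managed rather than raw. A hedged sketch that replays the same query and decodes the result, with the paths, image digest, and fsid taken verbatim from the logged command:

```python
import json
import subprocess

# Command copied from the sudo log line above; requires root, hence sudo.
CMD = [
    "sudo", "/bin/python3",
    "/var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/"
    "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
    "--image", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec",
    "--timeout", "895",
    "ceph-volume", "--fsid", "f3005f84-239a-55b6-a948-8f1fb592b920",
    "--", "raw", "list", "--format", "json",
]

def raw_osd_list() -> dict:
    """Run the query and decode its JSON; {} means no raw-mode OSDs on this host."""
    # Assumes cephadm keeps its own messages off stdout, as in the capture
    # above where the container emitted only "{}".
    out = subprocess.run(CMD, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

if __name__ == "__main__":
    print(raw_osd_list())
```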
Jan 23 10:06:53 compute-0 sudo[211440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evqyduvyxkaskcgtftcarfgwwqoovpql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162813.0417812-2304-269364715432237/AnsiballZ_copy.py'
Jan 23 10:06:53 compute-0 sudo[211440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:54 compute-0 python3.9[211442]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162813.0417812-2304-269364715432237/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:54 compute-0 sudo[211440]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:54 compute-0 podman[211484]: 2026-01-23 10:06:54.189890067 +0000 UTC m=+0.058079030 container create 4f6735d021302e41a769411dc5dd2e0f63d2577452ee3e783dfb0d2ec1d702bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bassi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:06:54 compute-0 systemd[1]: Started libpod-conmon-4f6735d021302e41a769411dc5dd2e0f63d2577452ee3e783dfb0d2ec1d702bf.scope.
Jan 23 10:06:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:06:54 compute-0 podman[211484]: 2026-01-23 10:06:54.157050892 +0000 UTC m=+0.025239885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:06:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:06:54 compute-0 podman[211484]: 2026-01-23 10:06:54.274787786 +0000 UTC m=+0.142976779 container init 4f6735d021302e41a769411dc5dd2e0f63d2577452ee3e783dfb0d2ec1d702bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bassi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 10:06:54 compute-0 podman[211484]: 2026-01-23 10:06:54.280996837 +0000 UTC m=+0.149185800 container start 4f6735d021302e41a769411dc5dd2e0f63d2577452ee3e783dfb0d2ec1d702bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bassi, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:06:54 compute-0 podman[211484]: 2026-01-23 10:06:54.285952751 +0000 UTC m=+0.154141714 container attach 4f6735d021302e41a769411dc5dd2e0f63d2577452ee3e783dfb0d2ec1d702bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bassi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 10:06:54 compute-0 kind_bassi[211529]: 167 167
Jan 23 10:06:54 compute-0 systemd[1]: libpod-4f6735d021302e41a769411dc5dd2e0f63d2577452ee3e783dfb0d2ec1d702bf.scope: Deactivated successfully.
Jan 23 10:06:54 compute-0 podman[211484]: 2026-01-23 10:06:54.286814536 +0000 UTC m=+0.155003529 container died 4f6735d021302e41a769411dc5dd2e0f63d2577452ee3e783dfb0d2ec1d702bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8532c1eaf71894fedb4fdc0331df652f5b3b86efac5a4f3bd53199d84cd9fbf7-merged.mount: Deactivated successfully.
Jan 23 10:06:54 compute-0 podman[211484]: 2026-01-23 10:06:54.337463599 +0000 UTC m=+0.205652562 container remove 4f6735d021302e41a769411dc5dd2e0f63d2577452ee3e783dfb0d2ec1d702bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:06:54 compute-0 systemd[1]: libpod-conmon-4f6735d021302e41a769411dc5dd2e0f63d2577452ee3e783dfb0d2ec1d702bf.scope: Deactivated successfully.
Jan 23 10:06:54 compute-0 podman[211646]: 2026-01-23 10:06:54.515620591 +0000 UTC m=+0.049585323 container create 91d9b1a05914953f72f90791bc0f10a52c4fe09158647d14ad35bfc3fa6168d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shockley, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:06:54 compute-0 systemd[1]: Started libpod-conmon-91d9b1a05914953f72f90791bc0f10a52c4fe09158647d14ad35bfc3fa6168d5.scope.
Jan 23 10:06:54 compute-0 sudo[211687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aegowvjbeajuwevjpexraznjbpqgpoka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162814.2688558-2304-197113301550304/AnsiballZ_stat.py'
Jan 23 10:06:54 compute-0 sudo[211687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:54 compute-0 podman[211646]: 2026-01-23 10:06:54.493855638 +0000 UTC m=+0.027820390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:06:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97738ab8d670a83ab9c6a7e2300702934ad875623aa30b3a16567828d4f4308/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97738ab8d670a83ab9c6a7e2300702934ad875623aa30b3a16567828d4f4308/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97738ab8d670a83ab9c6a7e2300702934ad875623aa30b3a16567828d4f4308/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f97738ab8d670a83ab9c6a7e2300702934ad875623aa30b3a16567828d4f4308/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:54 compute-0 podman[211646]: 2026-01-23 10:06:54.626172116 +0000 UTC m=+0.160136898 container init 91d9b1a05914953f72f90791bc0f10a52c4fe09158647d14ad35bfc3fa6168d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shockley, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 10:06:54 compute-0 podman[211646]: 2026-01-23 10:06:54.6345564 +0000 UTC m=+0.168521152 container start 91d9b1a05914953f72f90791bc0f10a52c4fe09158647d14ad35bfc3fa6168d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 10:06:54 compute-0 podman[211646]: 2026-01-23 10:06:54.638951768 +0000 UTC m=+0.172916500 container attach 91d9b1a05914953f72f90791bc0f10a52c4fe09158647d14ad35bfc3fa6168d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 10:06:54 compute-0 python3.9[211695]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:54 compute-0 sudo[211687]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:55.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:55 compute-0 sudo[211867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eawybolgdxqsbcbnwczwmvqgcixoldvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162814.2688558-2304-197113301550304/AnsiballZ_copy.py'
Jan 23 10:06:55 compute-0 sudo[211867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:55 compute-0 python3.9[211872]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162814.2688558-2304-197113301550304/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:55.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:55 compute-0 sudo[211867]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:55 compute-0 lvm[211892]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:06:55 compute-0 lvm[211892]: VG ceph_vg0 finished
Jan 23 10:06:55 compute-0 elegant_shockley[211692]: {}
Jan 23 10:06:55 compute-0 systemd[1]: libpod-91d9b1a05914953f72f90791bc0f10a52c4fe09158647d14ad35bfc3fa6168d5.scope: Deactivated successfully.
Jan 23 10:06:55 compute-0 systemd[1]: libpod-91d9b1a05914953f72f90791bc0f10a52c4fe09158647d14ad35bfc3fa6168d5.scope: Consumed 1.244s CPU time.
Jan 23 10:06:55 compute-0 podman[211646]: 2026-01-23 10:06:55.46838186 +0000 UTC m=+1.002346592 container died 91d9b1a05914953f72f90791bc0f10a52c4fe09158647d14ad35bfc3fa6168d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f97738ab8d670a83ab9c6a7e2300702934ad875623aa30b3a16567828d4f4308-merged.mount: Deactivated successfully.
Jan 23 10:06:55 compute-0 podman[211646]: 2026-01-23 10:06:55.522384181 +0000 UTC m=+1.056348913 container remove 91d9b1a05914953f72f90791bc0f10a52c4fe09158647d14ad35bfc3fa6168d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shockley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Jan 23 10:06:55 compute-0 systemd[1]: libpod-conmon-91d9b1a05914953f72f90791bc0f10a52c4fe09158647d14ad35bfc3fa6168d5.scope: Deactivated successfully.
Jan 23 10:06:55 compute-0 sudo[211359]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:06:55 compute-0 sudo[212053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydkozlezvkwtlqagiswmoqwzhhkdoclk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162815.4649646-2304-174119185610174/AnsiballZ_stat.py'
Jan 23 10:06:55 compute-0 sudo[212053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:55 compute-0 ceph-mon[74335]: pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:06:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:06:55 compute-0 python3.9[212055]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:55 compute-0 sudo[212053]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:06:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:56 compute-0 sudo[212057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:06:56 compute-0 sudo[212057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:06:56 compute-0 sudo[212057]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:06:56 compute-0 sudo[212203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvsyvffzudpelkpvknlowbbfrnirqtdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162815.4649646-2304-174119185610174/AnsiballZ_copy.py'
Jan 23 10:06:56 compute-0 sudo[212203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:56 compute-0 python3.9[212205]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162815.4649646-2304-174119185610174/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:56 compute-0 sudo[212203]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:57.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:06:57.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
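[annotation] The Alertmanager error above means the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443, path /api/prometheus_receiver) did not answer before the notification deadline, so the alert was dropped after two retries on each receiver. A small diagnostic sketch (hypothetical, not part of the deployment) that probes each receiver with an explicit timeout to separate "unreachable" from "merely slow":

```python
import json
import urllib.request

# Receiver URLs copied from the Alertmanager error above.
RECEIVERS = [
    "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
    "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
]

def probe(url: str, timeout: float = 5.0) -> str:
    """POST a minimal, empty alert payload and report status or failure reason."""
    req = urllib.request.Request(
        url,
        data=json.dumps({"alerts": []}).encode(),  # minimal probe body, not a real alert
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return f"{url} -> HTTP {resp.status}"
    except OSError as exc:  # URLError, connection refused, timeouts
        return f"{url} -> failed: {exc}"

for url in RECEIVERS:
    print(probe(url))
```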
Jan 23 10:06:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:06:57 compute-0 ceph-mon[74335]: pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:06:57 compute-0 sudo[212355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fduiomfaopldbfigpvdfscbxueqglsdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162816.927128-2304-175709803883782/AnsiballZ_stat.py'
Jan 23 10:06:57 compute-0 sudo[212355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:57.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:57 compute-0 python3.9[212357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:57 compute-0 sudo[212355]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:57 compute-0 sudo[212478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjxddfqthnrvcifaubzmozizsaiacsge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162816.927128-2304-175709803883782/AnsiballZ_copy.py'
Jan 23 10:06:57 compute-0 sudo[212478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:57 compute-0 python3.9[212480]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162816.927128-2304-175709803883782/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:57 compute-0 sudo[212478]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:06:58 compute-0 sudo[212632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdaywdmmmhpuldpdmoxmzyhvwepkfbmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162818.0501966-2304-229733477312914/AnsiballZ_stat.py'
Jan 23 10:06:58 compute-0 sudo[212632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:58 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 6.
Jan 23 10:06:58 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:06:58 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.915s CPU time.
Jan 23 10:06:58 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 10:06:58 compute-0 python3.9[212634]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:58 compute-0 sudo[212632]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:58 compute-0 podman[212700]: 2026-01-23 10:06:58.665384169 +0000 UTC m=+0.046079411 container create 1c3f32fbcd628d023aea69847b2e3a97561d6f6a8cf586c68cdfd832d662b66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77922a0e14f419b7a4647e59b5eaf47d7581657ddd7a110d0927d4b235dc5db5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77922a0e14f419b7a4647e59b5eaf47d7581657ddd7a110d0927d4b235dc5db5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77922a0e14f419b7a4647e59b5eaf47d7581657ddd7a110d0927d4b235dc5db5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77922a0e14f419b7a4647e59b5eaf47d7581657ddd7a110d0927d4b235dc5db5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:06:58 compute-0 podman[212700]: 2026-01-23 10:06:58.732506362 +0000 UTC m=+0.113201634 container init 1c3f32fbcd628d023aea69847b2e3a97561d6f6a8cf586c68cdfd832d662b66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:06:58 compute-0 podman[212700]: 2026-01-23 10:06:58.737331422 +0000 UTC m=+0.118026684 container start 1c3f32fbcd628d023aea69847b2e3a97561d6f6a8cf586c68cdfd832d662b66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 10:06:58 compute-0 podman[212700]: 2026-01-23 10:06:58.642995438 +0000 UTC m=+0.023690710 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:06:58 compute-0 bash[212700]: 1c3f32fbcd628d023aea69847b2e3a97561d6f6a8cf586c68cdfd832d662b66a
Jan 23 10:06:58 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:06:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:06:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 10:06:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:06:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 10:06:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:06:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 10:06:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:06:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 10:06:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:06:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 10:06:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:06:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 10:06:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:06:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 10:06:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:06:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
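[annotation] The ganesha.nfsd lines above are the sixth restart of the nfs.cephfs.2.0 daemon on this node (systemd's restart counter at 10:06:58): on each start it parses its config, brings up its monitoring endpoint on 0.0.0.0:9587, and enters a 90-second grace period before resuming normal service. A hedged readiness check, assuming the monitoring port speaks plain HTTP and serves Prometheus-style metrics under /metrics (the path is not confirmed by the log):

```python
import time
import urllib.request

# Port taken from the monitoring_init line above; the /metrics path is an assumption.
GANESHA_METRICS = "http://127.0.0.1:9587/metrics"

def wait_for_ganesha(url: str = GANESHA_METRICS, attempts: int = 10, delay: float = 3.0) -> bool:
    """Poll the monitoring endpoint until it answers or attempts run out."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # daemon still restarting or port not yet bound
        time.sleep(delay)
    return False

if __name__ == "__main__":
    print("ganesha monitoring up:", wait_for_ganesha())
```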
Jan 23 10:06:58 compute-0 sudo[212853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzdauhrzrbyvquytdehdqhpdbrwpcuzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162818.0501966-2304-229733477312914/AnsiballZ_copy.py'
Jan 23 10:06:58 compute-0 sudo[212853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:06:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:59.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:06:59 compute-0 python3.9[212855]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162818.0501966-2304-229733477312914/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:06:59 compute-0 sudo[212853]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:06:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:06:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:59.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:06:59 compute-0 ceph-mon[74335]: pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:06:59 compute-0 sudo[213005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqiacnrupobstgqterobabemksawwmpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162819.1854842-2304-123264877863076/AnsiballZ_stat.py'
Jan 23 10:06:59 compute-0 sudo[213005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:59 compute-0 python3.9[213007]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:06:59 compute-0 sudo[213005]: pam_unix(sudo:session): session closed for user root
Jan 23 10:06:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:06:59.757 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:06:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:06:59.759 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:06:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:06:59.759 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:06:59 compute-0 sudo[213129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzftmryhmhgtejxwsqpebmxrkcotdzdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162819.1854842-2304-123264877863076/AnsiballZ_copy.py'
Jan 23 10:06:59 compute-0 sudo[213129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:06:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:59] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:06:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:06:59] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:07:00 compute-0 sudo[213132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:07:00 compute-0 sudo[213132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:07:00 compute-0 sudo[213132]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:00 compute-0 python3.9[213131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162819.1854842-2304-123264877863076/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:00 compute-0 sudo[213129]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:07:00 compute-0 sudo[213307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbltaihcoxwaczcbzgpavuposgjluycj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162820.2984092-2304-279507290788090/AnsiballZ_stat.py'
Jan 23 10:07:00 compute-0 sudo[213307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:00 compute-0 python3.9[213309]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:00 compute-0 sudo[213307]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:01.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:01 compute-0 sudo[213430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avzhruhkydvshsthtnxkssxnvmwgbihi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162820.2984092-2304-279507290788090/AnsiballZ_copy.py'
Jan 23 10:07:01 compute-0 sudo[213430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:01.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:01 compute-0 ceph-mon[74335]: pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:07:01 compute-0 python3.9[213432]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162820.2984092-2304-279507290788090/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:01 compute-0 sudo[213430]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:01 compute-0 sudo[213583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cemhmfwhzfvosmqhdmqsoighjuaklsgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162821.6355774-2304-104971156808125/AnsiballZ_stat.py'
Jan 23 10:07:01 compute-0 sudo[213583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:02 compute-0 python3.9[213585]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:02 compute-0 sudo[213583]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Jan 23 10:07:02 compute-0 sudo[213707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzhwdmozssvcomddycfdgwijiceoqoeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162821.6355774-2304-104971156808125/AnsiballZ_copy.py'
Jan 23 10:07:02 compute-0 sudo[213707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:02 compute-0 python3.9[213709]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162821.6355774-2304-104971156808125/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:02 compute-0 sudo[213707]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:03.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:03 compute-0 podman[213833]: 2026-01-23 10:07:03.096278824 +0000 UTC m=+0.057561010 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 10:07:03 compute-0 python3.9[213870]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:07:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:03.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:03 compute-0 ceph-mon[74335]: pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Jan 23 10:07:04 compute-0 sudo[214032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwsyjfvkjrpnzxccjeznolbhdmcqulij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162823.5545857-2922-105127821950241/AnsiballZ_seboolean.py'
Jan 23 10:07:04 compute-0 sudo[214032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Jan 23 10:07:04 compute-0 python3.9[214034]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 23 10:07:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:07:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:07:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000019s ======
Jan 23 10:07:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:05.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Jan 23 10:07:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:07:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:07:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:05.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:05 compute-0 ceph-mon[74335]: pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Jan 23 10:07:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:07:05 compute-0 sudo[214032]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:06 compute-0 sudo[214190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxjrlcjpidadrwgazmmtnnfzzdqvmwgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162825.731391-2946-189786020930387/AnsiballZ_copy.py'
Jan 23 10:07:06 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 23 10:07:06 compute-0 sudo[214190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:06 compute-0 python3.9[214192]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:06 compute-0 sudo[214190]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:07:06 compute-0 sudo[214343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lflwaeycdohaopbxbftrqvmprvkcsvcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162826.360128-2946-132554364186348/AnsiballZ_copy.py'
Jan 23 10:07:06 compute-0 sudo[214343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:06 compute-0 python3.9[214345]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:06 compute-0 sudo[214343]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:06 compute-0 ceph-mon[74335]: pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:07:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:07:07.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:07:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:07.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:07 compute-0 sudo[214495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijwrvwveqqokssmaqqkqgxhrecphkawv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162826.9678266-2946-281125444612501/AnsiballZ_copy.py'
Jan 23 10:07:07 compute-0 sudo[214495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:07.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:07 compute-0 python3.9[214497]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:07 compute-0 sudo[214495]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:07 compute-0 sudo[214648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwyvesiihgqzmaobhiukxvryqdkkrsun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162827.5849128-2946-50785483068518/AnsiballZ_copy.py'
Jan 23 10:07:07 compute-0 sudo[214648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:08 compute-0 python3.9[214650]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:08 compute-0 sudo[214648]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:07:08 compute-0 sudo[214801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezjknknwjjkccvkyslxqemkiriisrxkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162828.1892385-2946-174146409190179/AnsiballZ_copy.py'
Jan 23 10:07:08 compute-0 sudo[214801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:08 compute-0 python3.9[214803]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:08 compute-0 sudo[214801]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000019s ======
Jan 23 10:07:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:09.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Jan 23 10:07:09 compute-0 ceph-mon[74335]: pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:07:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:09] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:07:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:09] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:07:10 compute-0 sudo[214954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqbonlrbvrqwkindacpsjznmtdoknwha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162829.7772806-3054-179297687968290/AnsiballZ_copy.py'
Jan 23 10:07:10 compute-0 sudo[214954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:10 compute-0 python3.9[214956]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:10 compute-0 sudo[214954]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100710 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:07:10 compute-0 sudo[215107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwwmoomosjjszzeufajdcadygzcbwmtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162830.370258-3054-194330628364252/AnsiballZ_copy.py'
Jan 23 10:07:10 compute-0 sudo[215107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:10 compute-0 python3.9[215109]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:10 compute-0 sudo[215107]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 10:07:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:07:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:11 compute-0 sudo[215272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlmprfmnzzliamxbitjmzwuimnypbcwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162830.9763923-3054-2318439089271/AnsiballZ_copy.py'
Jan 23 10:07:11 compute-0 sudo[215272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:11.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:11 compute-0 python3.9[215274]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:11 compute-0 sudo[215272]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:11 compute-0 ceph-mon[74335]: pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:07:11 compute-0 sudo[215425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auijvopcubmknmkvutinpejfmbnwvany ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162831.5639398-3054-136776256837327/AnsiballZ_copy.py'
Jan 23 10:07:11 compute-0 sudo[215425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:12 compute-0 python3.9[215427]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:12 compute-0 sudo[215425]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 23 10:07:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:12 compute-0 sudo[215581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anyjxtrjwxwfamrxuuynmewtwtwjuelj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162832.1976192-3054-212982252395506/AnsiballZ_copy.py'
Jan 23 10:07:12 compute-0 sudo[215581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:12 compute-0 python3.9[215583]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:12 compute-0 sudo[215581]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58000da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:12 compute-0 ceph-mon[74335]: pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 23 10:07:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.831553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162832832018, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 657, "num_deletes": 251, "total_data_size": 1017727, "memory_usage": 1030736, "flush_reason": "Manual Compaction"}
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162832845796, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 986082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17081, "largest_seqno": 17737, "table_properties": {"data_size": 982539, "index_size": 1387, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7958, "raw_average_key_size": 19, "raw_value_size": 975535, "raw_average_value_size": 2373, "num_data_blocks": 61, "num_entries": 411, "num_filter_entries": 411, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162786, "oldest_key_time": 1769162786, "file_creation_time": 1769162832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 14107 microseconds, and 5205 cpu microseconds.
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.845864) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 986082 bytes OK
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.845893) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.848929) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.848957) EVENT_LOG_v1 {"time_micros": 1769162832848948, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.848974) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1014305, prev total WAL file size 1014305, number of live WAL files 2.
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.849717) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(962KB)], [35(12MB)]
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162832849873, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 14160190, "oldest_snapshot_seqno": -1}
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4874 keys, 11785595 bytes, temperature: kUnknown
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162832987675, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 11785595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11752237, "index_size": 20064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 123266, "raw_average_key_size": 25, "raw_value_size": 11662843, "raw_average_value_size": 2392, "num_data_blocks": 835, "num_entries": 4874, "num_filter_entries": 4874, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769162832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.987995) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 11785595 bytes
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.994027) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 102.7 rd, 85.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.6 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(26.3) write-amplify(12.0) OK, records in: 5389, records dropped: 515 output_compression: NoCompression
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.994061) EVENT_LOG_v1 {"time_micros": 1769162832994045, "job": 16, "event": "compaction_finished", "compaction_time_micros": 137929, "compaction_time_cpu_micros": 31504, "output_level": 6, "num_output_files": 1, "total_output_size": 11785595, "num_input_records": 5389, "num_output_records": 4874, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162832994326, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162832996121, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.849537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.996268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.996279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.996281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.996283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:07:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:07:12.996285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:07:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:13.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:13 compute-0 sudo[215733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sztwmylhkoyozrlihcnnibdfwexnieau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162833.0030496-3162-10760011357635/AnsiballZ_systemd.py'
Jan 23 10:07:13 compute-0 sudo[215733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:13.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:13 compute-0 python3.9[215735]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 10:07:13 compute-0 systemd[1]: Reloading.
Jan 23 10:07:13 compute-0 systemd-rc-local-generator[215756]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:07:13 compute-0 systemd-sysv-generator[215762]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:07:13 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 23 10:07:13 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 23 10:07:13 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 23 10:07:13 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 23 10:07:13 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 23 10:07:14 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 23 10:07:14 compute-0 sudo[215733]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 23 10:07:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:14 compute-0 sudo[215928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afahqhnhklimfjwwsbjpusbveqdsmets ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162834.2382174-3162-239532708661793/AnsiballZ_systemd.py'
Jan 23 10:07:14 compute-0 sudo[215928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100714 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:07:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:14 compute-0 python3.9[215930]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 10:07:14 compute-0 systemd[1]: Reloading.
Jan 23 10:07:14 compute-0 systemd-sysv-generator[215958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:07:14 compute-0 systemd-rc-local-generator[215952]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:07:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 23 10:07:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 23 10:07:15 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 23 10:07:15 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 23 10:07:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 23 10:07:15 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 23 10:07:15 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 23 10:07:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 23 10:07:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 23 10:07:15 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 23 10:07:15 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 23 10:07:15 compute-0 sudo[215928]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:15 compute-0 ceph-mon[74335]: pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 23 10:07:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:15.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:15 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 23 10:07:15 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 23 10:07:15 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 23 10:07:15 compute-0 sudo[216151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqwyubztlxmluzvfhfljfshiwgvbrdiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162835.4544747-3162-42646743865923/AnsiballZ_systemd.py'
Jan 23 10:07:15 compute-0 sudo[216151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:16 compute-0 python3.9[216154]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 10:07:16 compute-0 systemd[1]: Reloading.
Jan 23 10:07:16 compute-0 systemd-rc-local-generator[216182]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:07:16 compute-0 systemd-sysv-generator[216188]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:07:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 23 10:07:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:16 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 23 10:07:16 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 23 10:07:16 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 23 10:07:16 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 23 10:07:16 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 23 10:07:16 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 23 10:07:16 compute-0 sudo[216151]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:16 compute-0 setroubleshoot[215966]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 10020b82-d9f9-4374-9337-5929b923926b
Jan 23 10:07:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:16 compute-0 setroubleshoot[215966]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 23 10:07:16 compute-0 setroubleshoot[215966]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 10020b82-d9f9-4374-9337-5929b923926b
Jan 23 10:07:16 compute-0 setroubleshoot[215966]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 23 10:07:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:16 compute-0 sudo[216368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbywbwiaaabqwholhfmjovmwzzvhigkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162836.6725464-3162-272087001565834/AnsiballZ_systemd.py'
Jan 23 10:07:16 compute-0 sudo[216368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:07:17.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:07:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:17.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:17 compute-0 python3.9[216370]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
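The systemd module call above (daemon_reload=True, state=restarted) is roughly equivalent to running the following on the host; this is a hedged paraphrase of the effect, not the exact D-Bus calls Ansible issues:

  systemctl daemon-reload
  systemctl restart virtqemud.service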
Jan 23 10:07:17 compute-0 systemd[1]: Reloading.
Jan 23 10:07:17 compute-0 systemd-sysv-generator[216401]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:07:17 compute-0 systemd-rc-local-generator[216397]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:07:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:17.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:17 compute-0 ceph-mon[74335]: pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 23 10:07:17 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 23 10:07:17 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 23 10:07:17 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 10:07:17 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 23 10:07:17 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 23 10:07:17 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 23 10:07:17 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 23 10:07:17 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 23 10:07:17 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 23 10:07:17 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 23 10:07:17 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 23 10:07:17 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 23 10:07:17 compute-0 sudo[216368]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:18 compute-0 sudo[216591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjzcdfvlmegezoomnehunkoogjnprhzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162837.795854-3162-99895847565034/AnsiballZ_systemd.py'
Jan 23 10:07:18 compute-0 sudo[216591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:18 compute-0 podman[216559]: 2026-01-23 10:07:18.17921131 +0000 UTC m=+0.139675569 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 23 10:07:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:07:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:18 compute-0 python3.9[216598]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 10:07:18 compute-0 systemd[1]: Reloading.
Jan 23 10:07:18 compute-0 systemd-sysv-generator[216646]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:07:18 compute-0 systemd-rc-local-generator[216642]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:07:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:18 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 23 10:07:18 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 23 10:07:18 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 23 10:07:18 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 23 10:07:18 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 23 10:07:18 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 23 10:07:18 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 23 10:07:18 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 23 10:07:18 compute-0 sudo[216591]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 23 10:07:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:19.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 23 10:07:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:19.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:19 compute-0 ceph-mon[74335]: pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:07:19 compute-0 sudo[216824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdoncnqsdgerpjsiokotjorqrxqwkoxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162839.2828786-3273-85975220501765/AnsiballZ_file.py'
Jan 23 10:07:19 compute-0 sudo[216824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:19 compute-0 python3.9[216826]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:19 compute-0 sudo[216824]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:07:19
Jan 23 10:07:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:07:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:07:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', '.nfs', 'backups', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.log', '.mgr']
Jan 23 10:07:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:07:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:19] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:07:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:19] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:07:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:07:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:07:20 compute-0 sudo[216925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:07:20 compute-0 sudo[216925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:07:20 compute-0 sudo[216925]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:07:20 compute-0 sudo[217002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axivlwrecbgawcgaboapntfpmdflxhqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162839.9653237-3297-149476426129258/AnsiballZ_find.py'
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 sudo[217002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:07:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:07:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b740089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:20 compute-0 python3.9[217004]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 10:07:20 compute-0 sudo[217002]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:07:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:20 compute-0 sudo[217155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkfqcfqvjlhrhlbsyqdzwzlulahsusho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162840.6363308-3321-181829523833263/AnsiballZ_command.py'
Jan 23 10:07:20 compute-0 sudo[217155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:21.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:21 compute-0 python3.9[217157]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
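The _raw_params above wrap a small pipeline; an annotated copy of what it does, assuming ceph.conf contains the usual "fsid = <uuid>" line:

  set -o pipefail
  # First output line: the cluster name used by the deployment
  echo ceph
  # Second output line: the fsid value; awk prints the text after '=' and
  # xargs (with no command) collapses the surrounding whitespace
  awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs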
Jan 23 10:07:21 compute-0 sudo[217155]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:21.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:21 compute-0 ceph-mon[74335]: pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:07:21 compute-0 python3.9[217311]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 10:07:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:07:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b740089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:22 compute-0 python3.9[217463]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000019s ======
Jan 23 10:07:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:23.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Jan 23 10:07:23 compute-0 ceph-mon[74335]: pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:07:23 compute-0 python3.9[217584]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162842.3213305-3378-1009583607279/.source.xml follow=False _original_basename=secret.xml.j2 checksum=19688f6e42a741164eafec41a84b8e73a76d185a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:23.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:23 compute-0 sudo[217735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dacoitcczzvjhelxtdgngspzovszupwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162843.5513623-3423-177620437036055/AnsiballZ_command.py'
Jan 23 10:07:23 compute-0 sudo[217735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:24 compute-0 python3.9[217737]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine f3005f84-239a-55b6-a948-8f1fb592b920
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
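For context, the command above removes and re-defines the libvirt secret that holds the Ceph client key for this deployment (the UUID matches the fsid seen elsewhere in this log). A minimal sketch of what /tmp/secret.xml conventionally contains and of the follow-up step that attaches the key material; the usage name and the secret-set-value call are illustrative assumptions, not copied from this host:

  cat > /tmp/secret.xml <<'EOF'
  <secret ephemeral='no' private='no'>
    <uuid>f3005f84-239a-55b6-a948-8f1fb592b920</uuid>
    <usage type='ceph'>
      <!-- hypothetical usage name; the real deployment may use a different one -->
      <name>client.openstack secret</name>
    </usage>
  </secret>
  EOF
  virsh secret-define --file /tmp/secret.xml
  # Attach the actual key (base64 string from the Ceph keyring, not shown here)
  virsh secret-set-value --secret f3005f84-239a-55b6-a948-8f1fb592b920 --base64 "$CEPH_KEY"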
Jan 23 10:07:24 compute-0 polkitd[43358]: Registered Authentication Agent for unix-process:217739:396561 (system bus name :1.2756 [pkttyagent --process 217739 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 23 10:07:24 compute-0 polkitd[43358]: Unregistered Authentication Agent for unix-process:217739:396561 (system bus name :1.2756, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 23 10:07:24 compute-0 polkitd[43358]: Registered Authentication Agent for unix-process:217738:396560 (system bus name :1.2757 [pkttyagent --process 217738 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 23 10:07:24 compute-0 polkitd[43358]: Unregistered Authentication Agent for unix-process:217738:396560 (system bus name :1.2757, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 23 10:07:24 compute-0 sudo[217735]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b740089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:24 compute-0 python3.9[217900]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:25.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:25 compute-0 sudo[218050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnkvctwjupzfpupvjagyvcakhdnsjwxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162845.1123629-3471-177420586024888/AnsiballZ_command.py'
Jan 23 10:07:25 compute-0 sudo[218050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000019s ======
Jan 23 10:07:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:25.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Jan 23 10:07:25 compute-0 ceph-mon[74335]: pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:25 compute-0 sudo[218050]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:26 compute-0 sudo[218204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijxolovihyjaebpaeqpvuyuxzcrbpiju ; FSID=f3005f84-239a-55b6-a948-8f1fb592b920 KEY=AQB8Q3NpAAAAABAATAj6yCl+1UaIO/yyy7nUXA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162845.7989151-3495-194417555027610/AnsiballZ_command.py'
Jan 23 10:07:26 compute-0 sudo[218204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:26 compute-0 polkitd[43358]: Registered Authentication Agent for unix-process:218208:396786 (system bus name :1.2760 [pkttyagent --process 218208 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 23 10:07:26 compute-0 polkitd[43358]: Unregistered Authentication Agent for unix-process:218208:396786 (system bus name :1.2760, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 23 10:07:26 compute-0 sudo[218204]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b500032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:26 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 23 10:07:26 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.004s CPU time.
Jan 23 10:07:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:26 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 23 10:07:26 compute-0 sudo[218363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wluysbqevqxikofxpfpdlbdltvwhrqxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162846.529985-3519-262348911828016/AnsiballZ_copy.py'
Jan 23 10:07:26 compute-0 sudo[218363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:07:27.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:07:27 compute-0 python3.9[218365]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:07:27.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:07:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:07:27.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:07:27 compute-0 sudo[218363]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000019s ======
Jan 23 10:07:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:27.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Jan 23 10:07:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:27.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:27 compute-0 sudo[218515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-comqkreplxutvrxoaqgdniwawgtribor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162847.2003417-3543-15790728045439/AnsiballZ_stat.py'
Jan 23 10:07:27 compute-0 sudo[218515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:27 compute-0 ceph-mon[74335]: pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:27 compute-0 python3.9[218517]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:27 compute-0 sudo[218515]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:28 compute-0 sudo[218639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgwuzixnbbjzvwrztiybkdxdgrrebaeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162847.2003417-3543-15790728045439/AnsiballZ_copy.py'
Jan 23 10:07:28 compute-0 sudo[218639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:28 compute-0 python3.9[218641]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162847.2003417-3543-15790728045439/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:28 compute-0 sudo[218639]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:28 compute-0 sudo[218792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kewqdramgbnymefnjxzqaeyonmgmyful ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162848.5674012-3591-182892862389679/AnsiballZ_file.py'
Jan 23 10:07:28 compute-0 sudo[218792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b500032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:28 compute-0 python3.9[218794]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:29 compute-0 sudo[218792]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000019s ======
Jan 23 10:07:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:29.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Jan 23 10:07:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 23 10:07:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:29.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 23 10:07:29 compute-0 sudo[218944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvjhqldjugrthnxyxfzluwdczpqahqdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162849.2433221-3615-181580802415398/AnsiballZ_stat.py'
Jan 23 10:07:29 compute-0 sudo[218944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:29 compute-0 ceph-mon[74335]: pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:29 compute-0 python3.9[218946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:29 compute-0 sudo[218944]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:29 compute-0 sudo[219023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsihustnkgoshfyvnaeetpreqtjanjfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162849.2433221-3615-181580802415398/AnsiballZ_file.py'
Jan 23 10:07:29 compute-0 sudo[219023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:29] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:07:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:29] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:07:30 compute-0 python3.9[219025]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:30 compute-0 sudo[219023]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:30 compute-0 sudo[219176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnkmqfaouyimptnkqzzgckikrcjovpjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162850.4079764-3651-129144026599721/AnsiballZ_stat.py'
Jan 23 10:07:30 compute-0 sudo[219176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:30 compute-0 python3.9[219178]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:30 compute-0 sudo[219176]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 23 10:07:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:31.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 23 10:07:31 compute-0 sudo[219254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oibqjocxrnhaoozcjnztppcrlsssiemd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162850.4079764-3651-129144026599721/AnsiballZ_file.py'
Jan 23 10:07:31 compute-0 sudo[219254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:31 compute-0 python3.9[219256]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.gsqiwpw1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:31 compute-0 sudo[219254]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 23 10:07:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:31.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 23 10:07:31 compute-0 ceph-mon[74335]: pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:31 compute-0 sudo[219406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sttcirbeuovqqxevdzhqfxmdyumgueqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162851.4828653-3687-252744138949088/AnsiballZ_stat.py'
Jan 23 10:07:31 compute-0 sudo[219406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:31 compute-0 python3.9[219408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:31 compute-0 sudo[219406]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:32 compute-0 sudo[219485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odvnswitqljugjgbgxtiokvfdxoxulgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162851.4828653-3687-252744138949088/AnsiballZ_file.py'
Jan 23 10:07:32 compute-0 sudo[219485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:07:32 compute-0 python3.9[219487]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:32 compute-0 sudo[219485]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:32 compute-0 sudo[219638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-satbqjuyqlijinaydcyhrusexkaoitjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162852.6785526-3726-210204077521625/AnsiballZ_command.py'
Jan 23 10:07:32 compute-0 sudo[219638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:07:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:33.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:07:33 compute-0 python3.9[219640]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:07:33 compute-0 sudo[219638]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:07:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:33.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:07:33 compute-0 podman[219718]: 2026-01-23 10:07:33.537432045 +0000 UTC m=+0.057846299 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 10:07:33 compute-0 ceph-mon[74335]: pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:07:33 compute-0 sudo[219811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsekxhrjwkzrqjbqebdnojsxnxrlmumy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769162853.3296232-3750-25781844421803/AnsiballZ_edpm_nftables_from_files.py'
Jan 23 10:07:33 compute-0 sudo[219811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:33 compute-0 python3[219813]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 10:07:33 compute-0 sudo[219811]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:34 compute-0 sudo[219965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvpnbtjcgjyuifehczdvcrfznoufhxaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162854.119807-3774-209805675130312/AnsiballZ_stat.py'
Jan 23 10:07:34 compute-0 sudo[219965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:34 compute-0 python3.9[219967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:34 compute-0 sudo[219965]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:34 compute-0 ceph-mon[74335]: pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:34 compute-0 sudo[220043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eakjimxjrbeoltyheqrboqzyvkktweds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162854.119807-3774-209805675130312/AnsiballZ_file.py'
Jan 23 10:07:34 compute-0 sudo[220043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:07:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:07:35 compute-0 python3.9[220045]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:35 compute-0 sudo[220043]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:07:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:35.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:07:35 compute-0 sudo[220195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgvxfglvwexgeysxvtwensudybtihook ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162855.2236536-3810-226590474014516/AnsiballZ_stat.py'
Jan 23 10:07:35 compute-0 sudo[220195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:35 compute-0 python3.9[220197]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:07:35 compute-0 sudo[220195]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:36 compute-0 sudo[220321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvvugotdndhxkmwarjieixkingxxijaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162855.2236536-3810-226590474014516/AnsiballZ_copy.py'
Jan 23 10:07:36 compute-0 sudo[220321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:36 compute-0 python3.9[220323]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162855.2236536-3810-226590474014516/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:36 compute-0 sudo[220321]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:36 compute-0 sudo[220474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnxlhhtmbuezffbdpgwiyatzyxeyiocz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162856.4583907-3855-151393540992090/AnsiballZ_stat.py'
Jan 23 10:07:36 compute-0 sudo[220474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:36 compute-0 ceph-mon[74335]: pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:36 compute-0 python3.9[220476]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:36 compute-0 sudo[220474]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:07:37.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:07:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:07:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:37.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:07:37 compute-0 sudo[220552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsulxvmwtjlmxzouzgsggjyiihcdsnlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162856.4583907-3855-151393540992090/AnsiballZ_file.py'
Jan 23 10:07:37 compute-0 sudo[220552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:37 compute-0 python3.9[220554]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:37 compute-0 sudo[220552]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:37.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:37 compute-0 sudo[220705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvrmvqsqpdradzbmopquwfcqiyeltpvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162857.527737-3891-231338465505145/AnsiballZ_stat.py'
Jan 23 10:07:37 compute-0 sudo[220705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:38 compute-0 python3.9[220707]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:38 compute-0 sudo[220705]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:38 compute-0 sudo[220784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evzldfqgjmlezmkzxuzpyvcsiacnkbva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162857.527737-3891-231338465505145/AnsiballZ_file.py'
Jan 23 10:07:38 compute-0 sudo[220784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:38 compute-0 python3.9[220786]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:38 compute-0 sudo[220784]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:38 compute-0 sudo[220936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbzcmdsqnmfldpvpuohhgkyncgnhxxte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162858.636691-3927-68520831327779/AnsiballZ_stat.py'
Jan 23 10:07:38 compute-0 sudo[220936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:39.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:39 compute-0 python3.9[220938]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:39 compute-0 sudo[220936]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:07:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:39.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:07:39 compute-0 ceph-mon[74335]: pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:39 compute-0 sudo[221061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqlswquleqtisjqizajjtfgtgmuqersq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162858.636691-3927-68520831327779/AnsiballZ_copy.py'
Jan 23 10:07:39 compute-0 sudo[221061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:39 compute-0 python3.9[221063]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769162858.636691-3927-68520831327779/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:39 compute-0 sudo[221061]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:39] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:07:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:39] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:07:40 compute-0 sudo[221176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:07:40 compute-0 sudo[221176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:07:40 compute-0 sudo[221176]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:40 compute-0 sudo[221239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgcghdbuwdkuvwvnvacfodeqyvhoecjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162859.9880743-3972-75335779393277/AnsiballZ_file.py'
Jan 23 10:07:40 compute-0 sudo[221239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:40 compute-0 python3.9[221241]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:40 compute-0 sudo[221239]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:40 compute-0 sudo[221392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkedjanskajouneiojhkmhflkcnlwujp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162860.6803145-3996-255645847018848/AnsiballZ_command.py'
Jan 23 10:07:40 compute-0 sudo[221392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:41.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:41 compute-0 python3.9[221394]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:07:41 compute-0 sudo[221392]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:41.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:41 compute-0 ceph-mon[74335]: pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:41 compute-0 sudo[221548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htolxratgkipxpiflyicqueuqpiplfdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162861.3907738-4020-226535704823274/AnsiballZ_blockinfile.py'
Jan 23 10:07:41 compute-0 sudo[221548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:07:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3906 writes, 17K keys, 3902 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 3906 writes, 3902 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1423 writes, 6068 keys, 1419 commit groups, 1.0 writes per commit group, ingest: 10.89 MB, 0.02 MB/s
                                           Interval WAL: 1424 writes, 1420 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     43.3      0.59              0.28         8    0.074       0      0       0.0       0.0
                                             L6      1/0   11.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     51.5     44.0      1.94              0.28         7    0.276     33K   3678       0.0       0.0
                                            Sum      1/0   11.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     39.4     43.9      2.53              0.55        15    0.169     33K   3678       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.7     36.1     34.9      1.66              0.19         8    0.207     20K   2323       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     51.5     44.0      1.94              0.28         7    0.276     33K   3678       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     43.5      0.59              0.28         7    0.084       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.025, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.11 GB write, 0.09 MB/s write, 0.10 GB read, 0.08 MB/s read, 2.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5569ddb77350#2 capacity: 304.00 MB usage: 4.92 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000168 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(265,4.63 MB,1.52357%) FilterBlock(16,104.05 KB,0.0334238%) IndexBlock(16,194.59 KB,0.0625108%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 10:07:42 compute-0 python3.9[221550]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:42 compute-0 sudo[221548]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:07:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:42 compute-0 sudo[221701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtauvgdhuohpbreoaqnrlawgwxqlueby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162862.4402535-4047-277412434386792/AnsiballZ_command.py'
Jan 23 10:07:42 compute-0 sudo[221701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:42 compute-0 python3.9[221703]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:07:42 compute-0 sudo[221701]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:07:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:07:43 compute-0 sudo[221854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scarmksadvnmpiczauepgcmnqkryurjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162863.156906-4071-80923100283247/AnsiballZ_stat.py'
Jan 23 10:07:43 compute-0 sudo[221854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:43.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:43 compute-0 python3.9[221856]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:07:43 compute-0 sudo[221854]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:43 compute-0 ceph-mon[74335]: pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:07:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:44 compute-0 sudo[222012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwikajiwnmzgoebggsndzypbzefztsgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162864.1800778-4095-134944502891003/AnsiballZ_command.py'
Jan 23 10:07:44 compute-0 sudo[222012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:44 compute-0 python3.9[222014]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:07:44 compute-0 sudo[222012]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:44 compute-0 ceph-mon[74335]: pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:07:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:45.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:07:45 compute-0 sudo[222167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgdyinvpsgmhffhcdcueclwicvdfdttp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162864.8608165-4119-276469682587378/AnsiballZ_file.py'
Jan 23 10:07:45 compute-0 sudo[222167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:45 compute-0 python3.9[222169]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:45 compute-0 sudo[222167]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:45.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:45 compute-0 sudo[222320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thxxtaxgwjohinlnzaxawfzbewwljprs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162865.5497565-4143-86287022181593/AnsiballZ_stat.py'
Jan 23 10:07:45 compute-0 sudo[222320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:46 compute-0 python3.9[222322]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:46 compute-0 sudo[222320]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:46 compute-0 sudo[222444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhsvmulriufzpzyssbwcklaroswvyfqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162865.5497565-4143-86287022181593/AnsiballZ_copy.py'
Jan 23 10:07:46 compute-0 sudo[222444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:46 compute-0 python3.9[222446]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162865.5497565-4143-86287022181593/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:46 compute-0 sudo[222444]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:46 compute-0 sudo[222596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymgqcwwbqzizngtvtcmnziupmevakpxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162866.7519343-4188-233916998590251/AnsiballZ_stat.py'
Jan 23 10:07:46 compute-0 sudo[222596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:07:47.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:07:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:07:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:47.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:07:47 compute-0 python3.9[222598]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:47 compute-0 sudo[222596]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:07:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:47.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:07:47 compute-0 sudo[222719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbphcbgsgkdnskajvedwmbrowhmfemge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162866.7519343-4188-233916998590251/AnsiballZ_copy.py'
Jan 23 10:07:47 compute-0 sudo[222719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:47 compute-0 ceph-mon[74335]: pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:47 compute-0 python3.9[222721]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162866.7519343-4188-233916998590251/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:47 compute-0 sudo[222719]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:48 compute-0 sudo[222883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwyztyjkxmmtsaecjdolihbbmgcxvapy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162868.041619-4233-63637447803004/AnsiballZ_stat.py'
Jan 23 10:07:48 compute-0 sudo[222883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:48 compute-0 podman[222847]: 2026-01-23 10:07:48.356465489 +0000 UTC m=+0.081978921 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 23 10:07:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:48 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60001ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:48 compute-0 python3.9[222893]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:07:48 compute-0 sudo[222883]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:48 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:48 compute-0 sudo[223023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xemovowwpnejssrouufqpqgkvwrxippm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162868.041619-4233-63637447803004/AnsiballZ_copy.py'
Jan 23 10:07:48 compute-0 sudo[223023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:48 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:49 compute-0 python3.9[223025]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162868.041619-4233-63637447803004/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:07:49 compute-0 sudo[223023]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:49.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:49.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:49 compute-0 ceph-mon[74335]: pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:49 compute-0 sudo[223175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljrdshpwgedkfnaulyjyuncqoeyyyaal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162869.3146734-4278-150581661645363/AnsiballZ_systemd.py'
Jan 23 10:07:49 compute-0 sudo[223175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:49 compute-0 python3.9[223177]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:07:49 compute-0 systemd[1]: Reloading.
Jan 23 10:07:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:49] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:07:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:49] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:07:50 compute-0 systemd-rc-local-generator[223206]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:07:50 compute-0 systemd-sysv-generator[223209]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:07:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:07:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:07:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:07:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:07:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:07:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:07:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:07:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:07:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:50 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 23 10:07:50 compute-0 sudo[223175]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:50 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:50 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60002080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:50 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:50 compute-0 sudo[223368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjutqqutqndxiempsdpskzwkweafvkea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162870.5393147-4302-4862772046640/AnsiballZ_systemd.py'
Jan 23 10:07:50 compute-0 sudo[223368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:07:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:07:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:07:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:07:51 compute-0 python3.9[223370]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 23 10:07:51 compute-0 systemd[1]: Reloading.
Jan 23 10:07:51 compute-0 systemd-sysv-generator[223398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:07:51 compute-0 systemd-rc-local-generator[223391]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:07:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:51.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:51 compute-0 systemd[1]: Reloading.
Jan 23 10:07:51 compute-0 systemd-rc-local-generator[223436]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:07:51 compute-0 systemd-sysv-generator[223439]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:07:51 compute-0 sudo[223368]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:51 compute-0 ceph-mon[74335]: pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:07:52 compute-0 sshd-session[162507]: Connection closed by 192.168.122.30 port 55622
Jan 23 10:07:52 compute-0 sshd-session[162503]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:07:52 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Jan 23 10:07:52 compute-0 systemd[1]: session-53.scope: Consumed 3min 35.341s CPU time.
Jan 23 10:07:52 compute-0 systemd-logind[784]: Session 53 logged out. Waiting for processes to exit.
Jan 23 10:07:52 compute-0 systemd-logind[784]: Removed session 53.
Jan 23 10:07:52 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:07:52 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:07:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:52 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:52 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:52 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60002a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:52 compute-0 ceph-mon[74335]: pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:07:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:53.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:53.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:54 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:54 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:54 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:55.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:55 compute-0 ceph-mon[74335]: pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:55.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:07:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:56 compute-0 sudo[223474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:07:56 compute-0 sudo[223474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:07:56 compute-0 sudo[223474]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:56 compute-0 sudo[223500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:07:56 compute-0 sudo[223500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:07:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100756 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:07:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:56 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60002a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:56 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:56 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:56 compute-0 sudo[223500]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:07:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:07:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:07:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:07:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:07:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:07:57.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:07:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:07:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:07:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:57.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:07:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:07:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:07:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:07:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:07:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:07:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:07:57 compute-0 sudo[223557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:07:57 compute-0 sudo[223557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:07:57 compute-0 sudo[223557]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:57.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:57 compute-0 sudo[223582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:07:57 compute-0 sudo[223582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:07:57 compute-0 ceph-mon[74335]: pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:07:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:07:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:07:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:07:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:07:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:07:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:07:57 compute-0 sshd-session[223642]: Accepted publickey for zuul from 192.168.122.30 port 35524 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 10:07:57 compute-0 podman[223650]: 2026-01-23 10:07:57.827423956 +0000 UTC m=+0.043808877 container create bb9fcd162009c77d2c6df4ba3239d2edb3fc7f977aa0fdfeac9153004cb8d776 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:07:57 compute-0 systemd-logind[784]: New session 54 of user zuul.
Jan 23 10:07:57 compute-0 systemd[1]: Started Session 54 of User zuul.
Jan 23 10:07:57 compute-0 systemd[1]: Started libpod-conmon-bb9fcd162009c77d2c6df4ba3239d2edb3fc7f977aa0fdfeac9153004cb8d776.scope.
Jan 23 10:07:57 compute-0 sshd-session[223642]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:07:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:07:57 compute-0 podman[223650]: 2026-01-23 10:07:57.809108701 +0000 UTC m=+0.025493642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:07:57 compute-0 podman[223650]: 2026-01-23 10:07:57.905202045 +0000 UTC m=+0.121586986 container init bb9fcd162009c77d2c6df4ba3239d2edb3fc7f977aa0fdfeac9153004cb8d776 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:07:57 compute-0 podman[223650]: 2026-01-23 10:07:57.911233518 +0000 UTC m=+0.127618439 container start bb9fcd162009c77d2c6df4ba3239d2edb3fc7f977aa0fdfeac9153004cb8d776 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 10:07:57 compute-0 festive_noyce[223667]: 167 167
Jan 23 10:07:57 compute-0 systemd[1]: libpod-bb9fcd162009c77d2c6df4ba3239d2edb3fc7f977aa0fdfeac9153004cb8d776.scope: Deactivated successfully.
Jan 23 10:07:57 compute-0 podman[223650]: 2026-01-23 10:07:57.92001753 +0000 UTC m=+0.136402481 container attach bb9fcd162009c77d2c6df4ba3239d2edb3fc7f977aa0fdfeac9153004cb8d776 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 10:07:57 compute-0 podman[223650]: 2026-01-23 10:07:57.920813212 +0000 UTC m=+0.137198123 container died bb9fcd162009c77d2c6df4ba3239d2edb3fc7f977aa0fdfeac9153004cb8d776 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f54e63fd26c3ce4d46da38e3948508d67451efbf684d8d27e62d71df68fac569-merged.mount: Deactivated successfully.
Jan 23 10:07:57 compute-0 podman[223650]: 2026-01-23 10:07:57.990251663 +0000 UTC m=+0.206636574 container remove bb9fcd162009c77d2c6df4ba3239d2edb3fc7f977aa0fdfeac9153004cb8d776 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:07:57 compute-0 systemd[1]: libpod-conmon-bb9fcd162009c77d2c6df4ba3239d2edb3fc7f977aa0fdfeac9153004cb8d776.scope: Deactivated successfully.
Jan 23 10:07:58 compute-0 podman[223743]: 2026-01-23 10:07:58.151555806 +0000 UTC m=+0.044235939 container create b68c46c33bad7b60dfb252c3cc392fb2bb1cb7edbf7a999b45e8f3e7792980fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_darwin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 10:07:58 compute-0 systemd[1]: Started libpod-conmon-b68c46c33bad7b60dfb252c3cc392fb2bb1cb7edbf7a999b45e8f3e7792980fb.scope.
Jan 23 10:07:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8493ec893e4014b1f30ee092f203f3c610c2e10a1fc99337663af00fd59a234c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8493ec893e4014b1f30ee092f203f3c610c2e10a1fc99337663af00fd59a234c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8493ec893e4014b1f30ee092f203f3c610c2e10a1fc99337663af00fd59a234c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8493ec893e4014b1f30ee092f203f3c610c2e10a1fc99337663af00fd59a234c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8493ec893e4014b1f30ee092f203f3c610c2e10a1fc99337663af00fd59a234c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:07:58 compute-0 podman[223743]: 2026-01-23 10:07:58.132631494 +0000 UTC m=+0.025311667 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:07:58 compute-0 podman[223743]: 2026-01-23 10:07:58.238446567 +0000 UTC m=+0.131126720 container init b68c46c33bad7b60dfb252c3cc392fb2bb1cb7edbf7a999b45e8f3e7792980fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:07:58 compute-0 podman[223743]: 2026-01-23 10:07:58.246790686 +0000 UTC m=+0.139470829 container start b68c46c33bad7b60dfb252c3cc392fb2bb1cb7edbf7a999b45e8f3e7792980fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:07:58 compute-0 podman[223743]: 2026-01-23 10:07:58.250215044 +0000 UTC m=+0.142895187 container attach b68c46c33bad7b60dfb252c3cc392fb2bb1cb7edbf7a999b45e8f3e7792980fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 10:07:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:58 compute-0 dazzling_darwin[223760]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:07:58 compute-0 dazzling_darwin[223760]: --> All data devices are unavailable
Jan 23 10:07:58 compute-0 systemd[1]: libpod-b68c46c33bad7b60dfb252c3cc392fb2bb1cb7edbf7a999b45e8f3e7792980fb.scope: Deactivated successfully.
Jan 23 10:07:58 compute-0 podman[223743]: 2026-01-23 10:07:58.60282126 +0000 UTC m=+0.495501403 container died b68c46c33bad7b60dfb252c3cc392fb2bb1cb7edbf7a999b45e8f3e7792980fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8493ec893e4014b1f30ee092f203f3c610c2e10a1fc99337663af00fd59a234c-merged.mount: Deactivated successfully.
Jan 23 10:07:58 compute-0 podman[223743]: 2026-01-23 10:07:58.6847883 +0000 UTC m=+0.577468443 container remove b68c46c33bad7b60dfb252c3cc392fb2bb1cb7edbf7a999b45e8f3e7792980fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 10:07:58 compute-0 systemd[1]: libpod-conmon-b68c46c33bad7b60dfb252c3cc392fb2bb1cb7edbf7a999b45e8f3e7792980fb.scope: Deactivated successfully.
Jan 23 10:07:58 compute-0 sudo[223582]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600038a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:58 compute-0 sudo[223886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:07:58 compute-0 sudo[223886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:07:58 compute-0 sudo[223886]: pam_unix(sudo:session): session closed for user root
Jan 23 10:07:58 compute-0 sudo[223911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:07:58 compute-0 sudo[223911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:07:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:07:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:07:58 compute-0 python3.9[223872]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:07:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:07:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:59.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:07:59 compute-0 podman[223980]: 2026-01-23 10:07:59.25696738 +0000 UTC m=+0.043269031 container create 20d4e8256fa8994617878d0975520730fc917aa30675837bc9dbe65358d6deb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 10:07:59 compute-0 systemd[1]: Started libpod-conmon-20d4e8256fa8994617878d0975520730fc917aa30675837bc9dbe65358d6deb0.scope.
Jan 23 10:07:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:07:59 compute-0 podman[223980]: 2026-01-23 10:07:59.314659223 +0000 UTC m=+0.100960864 container init 20d4e8256fa8994617878d0975520730fc917aa30675837bc9dbe65358d6deb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:07:59 compute-0 podman[223980]: 2026-01-23 10:07:59.32153558 +0000 UTC m=+0.107837211 container start 20d4e8256fa8994617878d0975520730fc917aa30675837bc9dbe65358d6deb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:07:59 compute-0 podman[223980]: 2026-01-23 10:07:59.324171396 +0000 UTC m=+0.110473057 container attach 20d4e8256fa8994617878d0975520730fc917aa30675837bc9dbe65358d6deb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:07:59 compute-0 bold_stonebraker[224001]: 167 167
Jan 23 10:07:59 compute-0 systemd[1]: libpod-20d4e8256fa8994617878d0975520730fc917aa30675837bc9dbe65358d6deb0.scope: Deactivated successfully.
Jan 23 10:07:59 compute-0 podman[223980]: 2026-01-23 10:07:59.327611885 +0000 UTC m=+0.113913526 container died 20d4e8256fa8994617878d0975520730fc917aa30675837bc9dbe65358d6deb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 10:07:59 compute-0 podman[223980]: 2026-01-23 10:07:59.239288083 +0000 UTC m=+0.025589734 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-82eea20e008ccc1a6cb2e9b2e74d16a837df3433b5bf002595935c1bff155a2a-merged.mount: Deactivated successfully.
Jan 23 10:07:59 compute-0 podman[223980]: 2026-01-23 10:07:59.358486119 +0000 UTC m=+0.144787751 container remove 20d4e8256fa8994617878d0975520730fc917aa30675837bc9dbe65358d6deb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_stonebraker, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:07:59 compute-0 systemd[1]: libpod-conmon-20d4e8256fa8994617878d0975520730fc917aa30675837bc9dbe65358d6deb0.scope: Deactivated successfully.
Jan 23 10:07:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:07:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:07:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:59.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:07:59 compute-0 ceph-mon[74335]: pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:07:59 compute-0 podman[224042]: 2026-01-23 10:07:59.516605212 +0000 UTC m=+0.044003163 container create 7bf458b2d4deaf5ec597049a91ff9ef60c0f7565b187903db0d95d6ee149d73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_benz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:07:59 compute-0 systemd[1]: Started libpod-conmon-7bf458b2d4deaf5ec597049a91ff9ef60c0f7565b187903db0d95d6ee149d73f.scope.
Jan 23 10:07:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7253a6c31895bd78a88386777fd3d375b32669199b1818f395fd80aa3583a621/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7253a6c31895bd78a88386777fd3d375b32669199b1818f395fd80aa3583a621/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7253a6c31895bd78a88386777fd3d375b32669199b1818f395fd80aa3583a621/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7253a6c31895bd78a88386777fd3d375b32669199b1818f395fd80aa3583a621/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:07:59 compute-0 podman[224042]: 2026-01-23 10:07:59.591051055 +0000 UTC m=+0.118449006 container init 7bf458b2d4deaf5ec597049a91ff9ef60c0f7565b187903db0d95d6ee149d73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:07:59 compute-0 podman[224042]: 2026-01-23 10:07:59.49702212 +0000 UTC m=+0.024420071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:07:59 compute-0 podman[224042]: 2026-01-23 10:07:59.598447657 +0000 UTC m=+0.125845608 container start 7bf458b2d4deaf5ec597049a91ff9ef60c0f7565b187903db0d95d6ee149d73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_benz, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:07:59 compute-0 podman[224042]: 2026-01-23 10:07:59.602837513 +0000 UTC m=+0.130235474 container attach 7bf458b2d4deaf5ec597049a91ff9ef60c0f7565b187903db0d95d6ee149d73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_benz, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 10:07:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:07:59.758 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:07:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:07:59.760 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:07:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:07:59.760 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:07:59 compute-0 amazing_benz[224058]: {
Jan 23 10:07:59 compute-0 amazing_benz[224058]:     "1": [
Jan 23 10:07:59 compute-0 amazing_benz[224058]:         {
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             "devices": [
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "/dev/loop3"
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             ],
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             "lv_name": "ceph_lv0",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             "lv_size": "21470642176",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             "name": "ceph_lv0",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             "tags": {
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.cluster_name": "ceph",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.crush_device_class": "",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.encrypted": "0",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.osd_id": "1",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.type": "block",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.vdo": "0",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:                 "ceph.with_tpm": "0"
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             },
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             "type": "block",
Jan 23 10:07:59 compute-0 amazing_benz[224058]:             "vg_name": "ceph_vg0"
Jan 23 10:07:59 compute-0 amazing_benz[224058]:         }
Jan 23 10:07:59 compute-0 amazing_benz[224058]:     ]
Jan 23 10:07:59 compute-0 amazing_benz[224058]: }
Jan 23 10:07:59 compute-0 systemd[1]: libpod-7bf458b2d4deaf5ec597049a91ff9ef60c0f7565b187903db0d95d6ee149d73f.scope: Deactivated successfully.
Jan 23 10:07:59 compute-0 podman[224042]: 2026-01-23 10:07:59.901215075 +0000 UTC m=+0.428613016 container died 7bf458b2d4deaf5ec597049a91ff9ef60c0f7565b187903db0d95d6ee149d73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_benz, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 10:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7253a6c31895bd78a88386777fd3d375b32669199b1818f395fd80aa3583a621-merged.mount: Deactivated successfully.
Jan 23 10:07:59 compute-0 podman[224042]: 2026-01-23 10:07:59.945858765 +0000 UTC m=+0.473256706 container remove 7bf458b2d4deaf5ec597049a91ff9ef60c0f7565b187903db0d95d6ee149d73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:07:59 compute-0 systemd[1]: libpod-conmon-7bf458b2d4deaf5ec597049a91ff9ef60c0f7565b187903db0d95d6ee149d73f.scope: Deactivated successfully.
Jan 23 10:07:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:59] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:07:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:07:59] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:07:59 compute-0 sudo[223911]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:00 compute-0 sudo[224164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:08:00 compute-0 sudo[224164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:08:00 compute-0 sudo[224164]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:00 compute-0 sudo[224227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:08:00 compute-0 sudo[224227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:08:00 compute-0 sudo[224258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:08:00 compute-0 sudo[224258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:08:00 compute-0 sudo[224258]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:00 compute-0 python3.9[224238]: ansible-ansible.builtin.service_facts Invoked
Jan 23 10:08:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:00 compute-0 network[224307]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 10:08:00 compute-0 network[224308]: 'network-scripts' will be removed from distribution in near future.
Jan 23 10:08:00 compute-0 network[224309]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 10:08:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:00 compute-0 podman[224350]: 2026-01-23 10:08:00.54127709 +0000 UTC m=+0.057027305 container create c1a87014fe6ab5a957b35a4ee6dbd0a8d81ea25895967cdd276a6a30fe51f84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 23 10:08:00 compute-0 podman[224350]: 2026-01-23 10:08:00.50777659 +0000 UTC m=+0.023526755 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:08:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600038a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:01.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:01 compute-0 systemd[1]: Started libpod-conmon-c1a87014fe6ab5a957b35a4ee6dbd0a8d81ea25895967cdd276a6a30fe51f84f.scope.
Jan 23 10:08:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:08:01 compute-0 podman[224350]: 2026-01-23 10:08:01.272782507 +0000 UTC m=+0.788532712 container init c1a87014fe6ab5a957b35a4ee6dbd0a8d81ea25895967cdd276a6a30fe51f84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kilby, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:08:01 compute-0 podman[224350]: 2026-01-23 10:08:01.281883638 +0000 UTC m=+0.797633783 container start c1a87014fe6ab5a957b35a4ee6dbd0a8d81ea25895967cdd276a6a30fe51f84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:08:01 compute-0 beautiful_kilby[224367]: 167 167
Jan 23 10:08:01 compute-0 systemd[1]: libpod-c1a87014fe6ab5a957b35a4ee6dbd0a8d81ea25895967cdd276a6a30fe51f84f.scope: Deactivated successfully.
Jan 23 10:08:01 compute-0 podman[224350]: 2026-01-23 10:08:01.289614599 +0000 UTC m=+0.805364744 container attach c1a87014fe6ab5a957b35a4ee6dbd0a8d81ea25895967cdd276a6a30fe51f84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kilby, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:08:01 compute-0 podman[224350]: 2026-01-23 10:08:01.289937918 +0000 UTC m=+0.805688073 container died c1a87014fe6ab5a957b35a4ee6dbd0a8d81ea25895967cdd276a6a30fe51f84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 23 10:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-869fb615f0c55308e1dc6038ec4fff121c45cb24dbc582094f93ab7b700a74cf-merged.mount: Deactivated successfully.
Jan 23 10:08:01 compute-0 podman[224350]: 2026-01-23 10:08:01.324457858 +0000 UTC m=+0.840208003 container remove c1a87014fe6ab5a957b35a4ee6dbd0a8d81ea25895967cdd276a6a30fe51f84f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kilby, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:08:01 compute-0 systemd[1]: libpod-conmon-c1a87014fe6ab5a957b35a4ee6dbd0a8d81ea25895967cdd276a6a30fe51f84f.scope: Deactivated successfully.
Jan 23 10:08:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:01.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:01 compute-0 ceph-mon[74335]: pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:01 compute-0 podman[224401]: 2026-01-23 10:08:01.492891346 +0000 UTC m=+0.047742320 container create 479cef0f184bd317c5f83147aee6b8e5644e15d6d1d23a45f86bae92f73b6a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 23 10:08:01 compute-0 systemd[1]: Started libpod-conmon-479cef0f184bd317c5f83147aee6b8e5644e15d6d1d23a45f86bae92f73b6a05.scope.
Jan 23 10:08:01 compute-0 podman[224401]: 2026-01-23 10:08:01.469052232 +0000 UTC m=+0.023903226 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:08:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bce3445e3dc8c528c0350897afbbb9f218bb164b8303ece47b4c52131755e71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bce3445e3dc8c528c0350897afbbb9f218bb164b8303ece47b4c52131755e71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bce3445e3dc8c528c0350897afbbb9f218bb164b8303ece47b4c52131755e71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bce3445e3dc8c528c0350897afbbb9f218bb164b8303ece47b4c52131755e71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:08:01 compute-0 podman[224401]: 2026-01-23 10:08:01.602831327 +0000 UTC m=+0.157682321 container init 479cef0f184bd317c5f83147aee6b8e5644e15d6d1d23a45f86bae92f73b6a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:08:01 compute-0 podman[224401]: 2026-01-23 10:08:01.609377674 +0000 UTC m=+0.164228648 container start 479cef0f184bd317c5f83147aee6b8e5644e15d6d1d23a45f86bae92f73b6a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:08:01 compute-0 podman[224401]: 2026-01-23 10:08:01.617135267 +0000 UTC m=+0.171986241 container attach 479cef0f184bd317c5f83147aee6b8e5644e15d6d1d23a45f86bae92f73b6a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 10:08:02 compute-0 lvm[224537]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:08:02 compute-0 lvm[224537]: VG ceph_vg0 finished
Jan 23 10:08:02 compute-0 xenodochial_mendel[224423]: {}
Jan 23 10:08:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:08:02 compute-0 systemd[1]: libpod-479cef0f184bd317c5f83147aee6b8e5644e15d6d1d23a45f86bae92f73b6a05.scope: Deactivated successfully.
Jan 23 10:08:02 compute-0 systemd[1]: libpod-479cef0f184bd317c5f83147aee6b8e5644e15d6d1d23a45f86bae92f73b6a05.scope: Consumed 1.049s CPU time.
Jan 23 10:08:02 compute-0 podman[224401]: 2026-01-23 10:08:02.295538421 +0000 UTC m=+0.850389425 container died 479cef0f184bd317c5f83147aee6b8e5644e15d6d1d23a45f86bae92f73b6a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bce3445e3dc8c528c0350897afbbb9f218bb164b8303ece47b4c52131755e71-merged.mount: Deactivated successfully.
Jan 23 10:08:02 compute-0 podman[224401]: 2026-01-23 10:08:02.357124936 +0000 UTC m=+0.911975940 container remove 479cef0f184bd317c5f83147aee6b8e5644e15d6d1d23a45f86bae92f73b6a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:08:02 compute-0 systemd[1]: libpod-conmon-479cef0f184bd317c5f83147aee6b8e5644e15d6d1d23a45f86bae92f73b6a05.scope: Deactivated successfully.
Jan 23 10:08:02 compute-0 sudo[224227]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:08:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:02 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:08:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:08:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:08:02 compute-0 sudo[224565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:08:02 compute-0 sudo[224565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:08:02 compute-0 sudo[224565]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:02 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:02 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:08:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:03.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:08:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:08:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:03.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:08:03 compute-0 ceph-mon[74335]: pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:08:03 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:08:03 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:08:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:08:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:04 compute-0 podman[224662]: 2026-01-23 10:08:04.524078288 +0000 UTC m=+0.053182072 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 10:08:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:04 compute-0 sudo[224806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkzpudzukxlbgjipjigzwvkkbbkpskkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162884.5932684-96-255625894279367/AnsiballZ_setup.py'
Jan 23 10:08:04 compute-0 sudo[224806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:05 compute-0 ceph-mon[74335]: pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:08:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:05.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:05 compute-0 python3.9[224808]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 10:08:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:08:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:08:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:08:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:05.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:08:05 compute-0 sudo[224806]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:05 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:08:05 compute-0 sudo[224891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qssmhsjwvsjkmihnjygvcxbykucmxwwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162884.5932684-96-255625894279367/AnsiballZ_dnf.py'
Jan 23 10:08:05 compute-0 sudo[224891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:08:06 compute-0 python3.9[224893]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 10:08:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:08:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:06 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:06 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:06 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:08:07.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:08:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:08:07.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:08:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:08:07.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:08:07 compute-0 ceph-mon[74335]: pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:08:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:07.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:07.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:08:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:08:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:08:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b580010e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:08:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:09.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:08:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:09.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:09 compute-0 ceph-mon[74335]: pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:08:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:09] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:08:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:09] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:08:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:08:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:10 compute-0 ceph-mon[74335]: pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:08:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:11.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:11.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:11 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:08:11 compute-0 sudo[224891]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:08:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b580010e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:12 compute-0 sudo[225053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahidpjsxssmdsjlzncfllabkfdmgxgsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162892.0729835-132-128926254035240/AnsiballZ_stat.py'
Jan 23 10:08:12 compute-0 sudo[225053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:12 compute-0 python3.9[225055]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:08:12 compute-0 sudo[225053]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:13.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:13 compute-0 sudo[225205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gidnzblyujdvrskgliowgjduztoyqgon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162892.953277-162-271487888802316/AnsiballZ_command.py'
Jan 23 10:08:13 compute-0 sudo[225205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:13.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:13 compute-0 python3.9[225207]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:08:13 compute-0 sudo[225205]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:13 compute-0 ceph-mon[74335]: pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:08:14 compute-0 sudo[225359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aytamvjnipvwsfiunlawkjmvclfuhwer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162893.9589605-192-86000099679423/AnsiballZ_stat.py'
Jan 23 10:08:14 compute-0 sudo[225359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:08:14 compute-0 python3.9[225361]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:08:14 compute-0 sudo[225359]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:14 compute-0 ceph-mon[74335]: pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:08:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58001df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:14 compute-0 sudo[225512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlictpmlqnwzbjftydwtmpslqmtgnukn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162894.6344297-216-54585998774639/AnsiballZ_command.py'
Jan 23 10:08:14 compute-0 sudo[225512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:15 compute-0 python3.9[225514]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:08:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:15.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:15 compute-0 sudo[225512]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:08:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:15.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:08:15 compute-0 sudo[225665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpnmzcjekocpwmbufsxfhbzizmtfrytv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162895.351968-240-102089550000692/AnsiballZ_stat.py'
Jan 23 10:08:15 compute-0 sudo[225665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:15 compute-0 python3.9[225667]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:08:15 compute-0 sudo[225665]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:08:16 compute-0 sudo[225790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kafrrasfwovwuqunaptwcxaxvmofvxmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162895.351968-240-102089550000692/AnsiballZ_copy.py'
Jan 23 10:08:16 compute-0 sudo[225790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:16 compute-0 python3.9[225792]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162895.351968-240-102089550000692/.source.iscsi _original_basename=._ls6s_tt follow=False checksum=bc853f68ba78a8a18224967ff210dd31a75a0530 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:16 compute-0 sudo[225790]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58001df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:08:17.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:08:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:17.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:17 compute-0 sudo[225942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kchobgkmbnpjzamqcavmcoknacemfpuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162896.7386684-285-155712868495946/AnsiballZ_file.py'
Jan 23 10:08:17 compute-0 sudo[225942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:17 compute-0 python3.9[225944]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:17 compute-0 sudo[225942]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:17 compute-0 ceph-mon[74335]: pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:08:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:08:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:17.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:08:17 compute-0 sudo[226095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxpcosocxfixvzfenmvgqumydtfgipan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162897.551297-309-18333989764521/AnsiballZ_lineinfile.py'
Jan 23 10:08:17 compute-0 sudo[226095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:18 compute-0 python3.9[226097]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:18 compute-0 sudo[226095]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:08:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100818 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:08:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:18 compute-0 podman[226123]: 2026-01-23 10:08:18.580094478 +0000 UTC m=+0.093679141 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 10:08:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58001df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:19 compute-0 sudo[226274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gttamklmoyfngoauqdxvutwcukgeoaiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162898.4934711-336-4292075942643/AnsiballZ_systemd_service.py'
Jan 23 10:08:19 compute-0 sudo[226274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000058s ======
Jan 23 10:08:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:19.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Jan 23 10:08:19 compute-0 python3.9[226276]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:08:19 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 23 10:08:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:08:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:19.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:08:19 compute-0 sudo[226274]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:19 compute-0 ceph-mon[74335]: pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:08:19 compute-0 sudo[226431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmlaermeeesfbrqsbdftiilregcyxrnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162899.6340685-360-163331871306354/AnsiballZ_systemd_service.py'
Jan 23 10:08:19 compute-0 sudo[226431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:08:19
Jan 23 10:08:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:08:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:08:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['images', 'volumes', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.nfs']
Jan 23 10:08:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:08:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:19] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:08:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:19] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:08:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:08:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:08:20 compute-0 python3.9[226433]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:08:20 compute-0 systemd[1]: Reloading.
Jan 23 10:08:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:08:20 compute-0 systemd-sysv-generator[226489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:08:20 compute-0 systemd-rc-local-generator[226485]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:08:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:20 compute-0 sudo[226438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:08:20 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 23 10:08:20 compute-0 sudo[226438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:08:20 compute-0 sudo[226438]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:20 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 23 10:08:20 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 10:08:20 compute-0 systemd[1]: Started Open-iSCSI.
Jan 23 10:08:20 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 23 10:08:20 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 23 10:08:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:20 compute-0 sudo[226431]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:08:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:21.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:21.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:21 compute-0 ceph-mon[74335]: pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:08:22 compute-0 python3.9[226658]: ansible-ansible.builtin.service_facts Invoked
Jan 23 10:08:22 compute-0 network[226675]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 10:08:22 compute-0 network[226676]: 'network-scripts' will be removed from distribution in near future.
Jan 23 10:08:22 compute-0 network[226677]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 10:08:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:08:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58002ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58002ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:23 compute-0 ceph-mon[74335]: pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:08:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:23.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:23.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:08:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58002ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:08:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:25.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:08:25 compute-0 ceph-mon[74335]: pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:08:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:25.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:08:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:26 compute-0 sudo[226952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obvsjeilrlgjcbiembrnnqwulbujabkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162906.3428116-429-173925832606317/AnsiballZ_dnf.py'
Jan 23 10:08:26 compute-0 sudo[226952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:26 compute-0 python3.9[226954]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 10:08:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58002ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:08:27.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:08:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:08:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:27.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:08:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:27.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:27 compute-0 ceph-mon[74335]: pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:08:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:08:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:29.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:08:29 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 10:08:29 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 10:08:29 compute-0 systemd[1]: Reloading.
Jan 23 10:08:29 compute-0 systemd-rc-local-generator[227002]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:08:29 compute-0 systemd-sysv-generator[227005]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:08:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:29.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:29 compute-0 ceph-mon[74335]: pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:29 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 10:08:29 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 10:08:29 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 10:08:29 compute-0 systemd[1]: run-rf88fda081cee4e159585bbc482e63227.service: Deactivated successfully.
Jan 23 10:08:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:29] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Jan 23 10:08:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:29] "GET /metrics HTTP/1.1" 200 48432 "" "Prometheus/2.51.0"
Jan 23 10:08:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:30 compute-0 sudo[226952]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:31.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:31 compute-0 sudo[227271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnjsemjqirciicykxefamzwdxdvwltzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162911.0689309-456-139299287993408/AnsiballZ_file.py'
Jan 23 10:08:31 compute-0 sudo[227271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:31.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:31 compute-0 python3.9[227273]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 23 10:08:31 compute-0 sudo[227271]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:31 compute-0 ceph-mon[74335]: pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:32 compute-0 sudo[227424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idyhrmtcnjmbqnczwwaaqzdrvnxrovuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162911.7784066-480-195269567162003/AnsiballZ_modprobe.py'
Jan 23 10:08:32 compute-0 sudo[227424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:32 compute-0 python3.9[227426]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 23 10:08:32 compute-0 sudo[227424]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:32 compute-0 ceph-mon[74335]: pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:32 compute-0 sudo[227581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwsjyfiezwyvbfjjxceiwsoqahydghno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162912.58547-504-182547058646490/AnsiballZ_stat.py'
Jan 23 10:08:32 compute-0 sudo[227581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:33 compute-0 python3.9[227583]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:08:33 compute-0 sudo[227581]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:33.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:33 compute-0 sudo[227704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djmbfjceyxacoikeqvhvenwsxycnzvxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162912.58547-504-182547058646490/AnsiballZ_copy.py'
Jan 23 10:08:33 compute-0 sudo[227704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:33.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:33 compute-0 python3.9[227706]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162912.58547-504-182547058646490/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:33 compute-0 sudo[227704]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:34 compute-0 sudo[227857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvwnnbvnqibkiacougotgpptrjktcuhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162913.835281-552-120280940652393/AnsiballZ_lineinfile.py'
Jan 23 10:08:34 compute-0 sudo[227857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:34 compute-0 python3.9[227859]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:34 compute-0 sudo[227857]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:08:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:08:35 compute-0 sudo[228021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efzruytwpsdkzuyadbdmbrogkknkggui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162914.5609548-576-155166955884784/AnsiballZ_systemd.py'
Jan 23 10:08:35 compute-0 sudo[228021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:35 compute-0 podman[227984]: 2026-01-23 10:08:35.102873345 +0000 UTC m=+0.050363121 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 23 10:08:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:35.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:35 compute-0 python3.9[228031]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 10:08:35 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 10:08:35 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 23 10:08:35 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 23 10:08:35 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 23 10:08:35 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 23 10:08:35 compute-0 sudo[228021]: pam_unix(sudo:session): session closed for user root
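Taken together, the Ansible tasks logged above persist the dm-multipath module (via /etc/modules-load.d/dm-multipath.conf and /etc/modules), load it with community.general.modprobe, and then restart systemd-modules-load.service. A minimal follow-up check, which is not part of this job and assumes only that /proc/modules is readable, could look like this sketch:

    from pathlib import Path

    # Confirm that the dm-multipath module loaded above is resident. /proc/modules
    # lists it with an underscore (dm_multipath), unlike the dash used by modprobe.
    loaded = {line.split()[0]
              for line in Path("/proc/modules").read_text().splitlines()
              if line.strip()}
    print("dm_multipath loaded:", "dm_multipath" in loaded)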
Jan 23 10:08:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:35.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:35 compute-0 ceph-mon[74335]: pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:08:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:08:36 compute-0 sudo[228187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jguyzabkhixdsxetyfbfcozcubskzosp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162916.0302284-600-25550584906658/AnsiballZ_command.py'
Jan 23 10:08:36 compute-0 sudo[228187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:36 compute-0 python3.9[228189]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:08:36 compute-0 sudo[228187]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:08:37.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:08:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 10:08:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:08:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:37.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:08:37 compute-0 sudo[228340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihdscuvlsxdtwsjfqidirvcetyrworsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162916.9197414-630-31215223283300/AnsiballZ_stat.py'
Jan 23 10:08:37 compute-0 sudo[228340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:37 compute-0 python3.9[228342]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:08:37 compute-0 sudo[228340]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:37.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:37 compute-0 ceph-mon[74335]: pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:08:37 compute-0 sudo[228493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiqpluxzoacoknwexwkjidjdoxcmfizv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162917.701285-657-217218478535550/AnsiballZ_stat.py'
Jan 23 10:08:37 compute-0 sudo[228493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:38 compute-0 python3.9[228495]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:08:38 compute-0 sudo[228493]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:38 compute-0 sudo[228617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joyajmjbscpengapzbrnqplrdrcnvzil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162917.701285-657-217218478535550/AnsiballZ_copy.py'
Jan 23 10:08:38 compute-0 sudo[228617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:38 compute-0 python3.9[228621]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162917.701285-657-217218478535550/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:38 compute-0 sudo[228617]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:38 compute-0 ceph-mon[74335]: pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:39.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:39 compute-0 sudo[228771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqxeyiyrvwnolmwbfxkivgbjxqqfvnom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162919.0377848-702-83050661948573/AnsiballZ_command.py'
Jan 23 10:08:39 compute-0 sudo[228771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:39 compute-0 python3.9[228773]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:08:39 compute-0 sudo[228771]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:08:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:39.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:08:39 compute-0 sudo[228925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brpqkrywlfcydlthntztvtrawolhctdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162919.697788-726-271900052322515/AnsiballZ_lineinfile.py'
Jan 23 10:08:39 compute-0 sudo[228925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:39] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:08:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:39] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:08:40 compute-0 python3.9[228927]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:40 compute-0 sudo[228925]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:40 compute-0 sudo[229078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ognbdvenjkbqsldlrplheqxtzanollfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162920.4110975-750-8946974255107/AnsiballZ_replace.py'
Jan 23 10:08:40 compute-0 sudo[229078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b480010b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:41 compute-0 sudo[229081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:08:41 compute-0 sudo[229081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:08:41 compute-0 sudo[229081]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:41 compute-0 python3.9[229080]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:41 compute-0 sudo[229078]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:41.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:41 compute-0 ceph-mon[74335]: pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:08:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:41.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:08:41 compute-0 sudo[229255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfxqngsjndbwyfvbqsymrskdiqberfiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162921.27231-774-190337646811723/AnsiballZ_replace.py'
Jan 23 10:08:41 compute-0 sudo[229255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:41 compute-0 python3.9[229257]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:41 compute-0 sudo[229255]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:42 compute-0 sudo[229409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpuwcpapucmvojzcktnqsrppszfmemye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162922.101763-801-68043733826295/AnsiballZ_lineinfile.py'
Jan 23 10:08:42 compute-0 sudo[229409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:42 compute-0 python3.9[229411]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:42 compute-0 sudo[229409]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:42 compute-0 sudo[229561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sspdvfwycvpuznfeyerbaztddidzsziw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162922.7060204-801-12900393266565/AnsiballZ_lineinfile.py'
Jan 23 10:08:42 compute-0 sudo[229561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:43 compute-0 python3.9[229563]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:43.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:43 compute-0 sudo[229561]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:43 compute-0 ceph-mon[74335]: pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:43 compute-0 sudo[229713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okiaoryyyvzndlppwpqermnyujgjuwhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162923.2726786-801-279328583575347/AnsiballZ_lineinfile.py'
Jan 23 10:08:43 compute-0 sudo[229713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:43 compute-0 python3.9[229715]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:43 compute-0 sudo[229713]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:44 compute-0 sudo[229866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-witbeiirbgiklvrybcexerpzclmuegpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162923.8384886-801-247043654574835/AnsiballZ_lineinfile.py'
Jan 23 10:08:44 compute-0 sudo[229866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:44 compute-0 python3.9[229868]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:44 compute-0 sudo[229866]: pam_unix(sudo:session): session closed for user root
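The lineinfile and replace invocations logged above edit /etc/multipath.conf in place: they ensure a "blacklist {" line exists, close the block with "}", drop a catch-all devnode ".*" entry if one follows the opening brace, and pin find_multipaths, recheck_wwid, skip_kpartx and user_friendly_names under the defaults section. A rough Python sketch of the same edit sequence follows; it runs against an assumed sample configuration (the real /etc/multipath.conf content is not shown in this log) and only mirrors the logged module parameters, it is not the playbook itself.

    import re

    # Assumed starting point; illustrative only.
    conf = (
        "defaults {\n"
        "    user_friendly_names yes\n"
        "}\n"
    )

    # lineinfile line='blacklist {' state=present: append the line if it is missing.
    if not re.search(r"^blacklist\s*\{", conf, flags=re.M):
        conf += "blacklist {\n"

    # replace regexp='^(blacklist {)' replace='\1\n}': close the block right after it.
    conf = re.sub(r"^(blacklist \{)", r"\1\n}", conf, flags=re.M)

    # replace regexp='^blacklist\s*{\n[\s]+devnode "\.\*"' replace='blacklist {':
    # remove a catch-all devnode entry directly after the opening brace, if present.
    conf = re.sub(r'^blacklist\s*\{\n\s+devnode "\.\*"', "blacklist {", conf, flags=re.M)

    # lineinfile insertafter='^defaults' firstmatch=True for each defaults option:
    # replace an existing setting in place, otherwise insert it right after 'defaults'.
    for option in (
        "        find_multipaths yes",
        "        recheck_wwid yes",
        "        skip_kpartx yes",
        "        user_friendly_names no",
    ):
        name = option.split()[0]
        existing = re.compile(r"^\s+" + name + r"\b.*$", flags=re.M)
        if existing.search(conf):
            conf = existing.sub(option, conf, count=1)
        else:
            conf = re.sub(r"^(defaults.*)$", r"\1\n" + option, conf, count=1, flags=re.M)

    print(conf)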
Jan 23 10:08:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b480010b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:45 compute-0 sudo[230019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvzqrrsivmhxedyuvskdxepprgenfvok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162924.7941067-888-156250248999924/AnsiballZ_stat.py'
Jan 23 10:08:45 compute-0 sudo[230019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:45.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:45 compute-0 python3.9[230021]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:08:45 compute-0 sudo[230019]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:08:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:45.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:08:45 compute-0 ceph-mon[74335]: pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:45 compute-0 sudo[230173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymuuqbnstjtkwuiosawoufqlpxarxcvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162925.4580495-912-58471301477934/AnsiballZ_command.py'
Jan 23 10:08:45 compute-0 sudo[230173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:45 compute-0 python3.9[230175]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:08:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:45 compute-0 sudo[230173]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:08:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:46 compute-0 sudo[230328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twkhjdkqyeldwfwakqlywgsrwcccgydy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162926.2458186-939-152652530371622/AnsiballZ_systemd_service.py'
Jan 23 10:08:46 compute-0 sudo[230328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:46 compute-0 python3.9[230330]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:08:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:46 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 23 10:08:46 compute-0 sudo[230328]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:08:47.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:08:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:08:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:08:47 compute-0 sudo[230484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoatitpwtfwnhorffzwtayqhpwbocwmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162927.1573591-963-245213451186968/AnsiballZ_systemd_service.py'
Jan 23 10:08:47 compute-0 sudo[230484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:47.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:47 compute-0 ceph-mon[74335]: pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:08:47 compute-0 python3.9[230486]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:08:47 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 23 10:08:47 compute-0 udevadm[230492]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 23 10:08:47 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 23 10:08:47 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 23 10:08:47 compute-0 multipathd[230495]: --------start up--------
Jan 23 10:08:47 compute-0 multipathd[230495]: read /etc/multipath.conf
Jan 23 10:08:47 compute-0 multipathd[230495]: path checkers start up
Jan 23 10:08:47 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 23 10:08:47 compute-0 sudo[230484]: pam_unix(sudo:session): session closed for user root
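The two ansible.builtin.systemd_service tasks logged at 10:08:46 and 10:08:47 enable and start multipathd.socket and multipathd on this node. A minimal sketch of the same operation in plain Python, assuming root privileges and systemctl on PATH; this illustrates what the tasks do, it is not the playbook that ran them:

    # Rough equivalent of the two systemd_service tasks above
    # (enabled=True, state=started); illustrative only.
    import subprocess

    for unit in ("multipathd.socket", "multipathd.service"):
        subprocess.run(["systemctl", "enable", unit], check=True)  # enabled=True
        subprocess.run(["systemctl", "start", unit], check=True)   # state=started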
Jan 23 10:08:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:48 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:48 compute-0 ceph-mon[74335]: pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:48 compute-0 sudo[230666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctxyxmvzgiishelsltorlgjnonnkrtzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162928.4703636-999-239009039905858/AnsiballZ_file.py'
Jan 23 10:08:48 compute-0 sudo[230666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:48 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:48 compute-0 podman[230627]: 2026-01-23 10:08:48.782472196 +0000 UTC m=+0.105466809 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 23 10:08:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:48 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:48 compute-0 python3.9[230674]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 23 10:08:48 compute-0 sudo[230666]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:49 compute-0 sudo[230829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buvkomouobficwhbjswcfjdxkediswxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162929.1696784-1023-256586322826612/AnsiballZ_modprobe.py'
Jan 23 10:08:49 compute-0 sudo[230829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:49 compute-0 python3.9[230831]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 23 10:08:49 compute-0 kernel: Key type psk registered
Jan 23 10:08:49 compute-0 sudo[230829]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:49] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:08:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:49] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:08:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:08:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:08:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:08:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:08:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:08:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:08:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:08:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:08:50 compute-0 sudo[230993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqdtvmujkboenhtrxeqlnpixzioucava ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162929.9265099-1047-133756647490193/AnsiballZ_stat.py'
Jan 23 10:08:50 compute-0 sudo[230993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:08:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:50 compute-0 python3.9[230995]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:08:50 compute-0 sudo[230993]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:50 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:50 compute-0 sudo[231117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxtnopzgjfevjkxuueebckbddpweqom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162929.9265099-1047-133756647490193/AnsiballZ_copy.py'
Jan 23 10:08:50 compute-0 sudo[231117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:50 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:50 compute-0 python3.9[231119]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769162929.9265099-1047-133756647490193/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:50 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:50 compute-0 sudo[231117]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:08:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:51.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:08:51 compute-0 ceph-mon[74335]: pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:51 compute-0 sudo[231269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emlifqpazqyubujbmyegcbkonyxdmtoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162931.2366695-1095-53823364626284/AnsiballZ_lineinfile.py'
Jan 23 10:08:51 compute-0 sudo[231269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:08:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:51.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:08:51 compute-0 python3.9[231271]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:08:51 compute-0 sudo[231269]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:52 compute-0 sudo[231422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhhyhvldgbvowiatumapyczewikmucnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162931.926153-1119-60860723974035/AnsiballZ_systemd.py'
Jan 23 10:08:52 compute-0 sudo[231422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:52 compute-0 python3.9[231424]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 10:08:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:52 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:52 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 10:08:52 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 23 10:08:52 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 23 10:08:52 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 23 10:08:52 compute-0 systemd[1]: Finished Load Kernel Modules.
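Taken together, the tasks between 10:08:49 and 10:08:52 load the nvme-fabrics kernel module and make it persistent: modprobe it now, write /etc/modules-load.d/nvme-fabrics.conf, append the module name to /etc/modules, then restart systemd-modules-load.service. A rough Python equivalent of that sequence, with the module name and file paths taken from the task parameters above and assuming root privileges (a sketch of the effect, not the playbook itself):

    # Illustrative reconstruction of the nvme-fabrics steps seen in the log.
    import pathlib
    import subprocess

    MODULE = "nvme-fabrics"

    # 1. Load the module immediately (community.general.modprobe, state=present).
    subprocess.run(["modprobe", MODULE], check=True)

    # 2. Have systemd load it at boot (the copied nvme-fabrics.conf).
    pathlib.Path("/etc/modules-load.d/nvme-fabrics.conf").write_text(MODULE + "\n")

    # 3. Also add it to /etc/modules if missing (the lineinfile task).
    modules = pathlib.Path("/etc/modules")
    existing = modules.read_text() if modules.exists() else ""
    if MODULE not in existing.splitlines():
        modules.write_text(existing + MODULE + "\n")

    # 4. Re-run the systemd module loader so the change takes effect now.
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"], check=True)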
Jan 23 10:08:52 compute-0 sudo[231422]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:52 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:52 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:53 compute-0 sudo[231579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uijolntjlijdvyqcqfijgrfuuywtygxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162932.8626266-1143-242254078055484/AnsiballZ_dnf.py'
Jan 23 10:08:53 compute-0 sudo[231579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:08:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:53 compute-0 python3.9[231581]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
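The ansible.legacy.dnf task above installs the nvme-cli package; the systemd reloads and the man-db-cache-update run that follow through 10:08:58 are routine side effects of the RPM transaction. The install step itself reduces to something like the following (illustrative equivalent, not the playbook):

    # Equivalent of the dnf task above (name=['nvme-cli'], state=present).
    import subprocess
    subprocess.run(["dnf", "install", "-y", "nvme-cli"], check=True)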
Jan 23 10:08:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:53.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:53 compute-0 ceph-mon[74335]: pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:54 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:54 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:54 compute-0 ceph-mon[74335]: pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:54 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:55.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:08:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:55.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:08:55 compute-0 systemd[1]: Reloading.
Jan 23 10:08:55 compute-0 systemd-sysv-generator[231616]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:08:55 compute-0 systemd-rc-local-generator[231611]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:08:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:08:55 compute-0 systemd[1]: Reloading.
Jan 23 10:08:56 compute-0 systemd-rc-local-generator[231651]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:08:56 compute-0 systemd-sysv-generator[231655]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:08:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:08:56 compute-0 virtnodedevd[215972]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 23 10:08:56 compute-0 virtnodedevd[215972]: hostname: compute-0
Jan 23 10:08:56 compute-0 virtnodedevd[215972]: nl_recv returned with error: No buffer space available
Jan 23 10:08:56 compute-0 systemd-logind[784]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 23 10:08:56 compute-0 systemd-logind[784]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 23 10:08:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:56 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:56 compute-0 lvm[231696]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:08:56 compute-0 lvm[231696]: VG ceph_vg0 finished
Jan 23 10:08:56 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 10:08:56 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 23 10:08:56 compute-0 systemd[1]: Reloading.
Jan 23 10:08:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:56 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:56 compute-0 systemd-sysv-generator[231751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:08:56 compute-0 systemd-rc-local-generator[231744]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:08:56 compute-0 ceph-mon[74335]: pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:08:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:56 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:08:57.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:08:57 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 10:08:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:57.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:08:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:57.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:08:57 compute-0 sudo[231579]: pam_unix(sudo:session): session closed for user root
Jan 23 10:08:58 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 10:08:58 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 23 10:08:58 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.624s CPU time.
Jan 23 10:08:58 compute-0 systemd[1]: run-rebb053e457134960bca840a4af676859.service: Deactivated successfully.
Jan 23 10:08:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:08:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:08:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.005000143s ======
Jan 23 10:08:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:59.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000143s
Jan 23 10:08:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:08:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:08:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:59.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:08:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:08:59.760 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:08:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:08:59.762 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:08:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:08:59.762 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:08:59 compute-0 ceph-mon[74335]: pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:08:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100859 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:08:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:59] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:08:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:08:59] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:09:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:01 compute-0 sudo[233026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:09:01 compute-0 sudo[233026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:01 compute-0 sudo[233026]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:01 compute-0 sudo[233076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cellknlgeqphkwwgianbblkjohguepbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162940.8545353-1167-111197830862186/AnsiballZ_systemd_service.py'
Jan 23 10:09:01 compute-0 sudo[233076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:01.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:01 compute-0 python3.9[233079]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 10:09:01 compute-0 iscsid[226498]: iscsid shutting down.
Jan 23 10:09:01 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 23 10:09:01 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 23 10:09:01 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 23 10:09:01 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 23 10:09:01 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 23 10:09:01 compute-0 systemd[1]: Started Open-iSCSI.
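The iscsid restart at 10:09:01 comes from another systemd_service task (state=restarted); the ConditionPathExists message only means /etc/iscsi/initiatorname.iscsi already exists, so the one-time iscsi.service setup is skipped. As a plain equivalent of the restart (sketch only):

    # Equivalent of the systemd_service task above (name=iscsid, state=restarted).
    import subprocess
    subprocess.run(["systemctl", "restart", "iscsid"], check=True)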
Jan 23 10:09:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:01.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:01 compute-0 sudo[233076]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:01 compute-0 ceph-mon[74335]: pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:02 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:02 compute-0 sudo[233110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:09:02 compute-0 sudo[233110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:02 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:02 compute-0 sudo[233110]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:02 compute-0 sudo[233158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 23 10:09:02 compute-0 sudo[233158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:02 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:03 compute-0 ceph-mon[74335]: pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:03 compute-0 sudo[233158]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:09:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:03.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:03 compute-0 sudo[233307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oepvrscuupmtoqztgzprqupnhiqicnir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162942.7843127-1191-137117095060834/AnsiballZ_systemd_service.py'
Jan 23 10:09:03 compute-0 sudo[233307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:09:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:03.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:03 compute-0 python3.9[233309]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 10:09:03 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 23 10:09:03 compute-0 multipathd[230495]: exit (signal)
Jan 23 10:09:03 compute-0 multipathd[230495]: --------shut down-------
Jan 23 10:09:03 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 23 10:09:03 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 23 10:09:03 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 23 10:09:03 compute-0 multipathd[233315]: --------start up--------
Jan 23 10:09:03 compute-0 multipathd[233315]: read /etc/multipath.conf
Jan 23 10:09:03 compute-0 multipathd[233315]: path checkers start up
Jan 23 10:09:03 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 23 10:09:03 compute-0 sudo[233307]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:04 compute-0 sudo[233324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:09:04 compute-0 sudo[233324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:04 compute-0 sudo[233324]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:04 compute-0 sudo[233350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:09:04 compute-0 sudo[233350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:04 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:04 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:04 compute-0 sudo[233350]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:09:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:09:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:09:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:09:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:09:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:09:04 compute-0 python3.9[233541]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 10:09:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:09:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:09:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:09:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:09:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:09:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:09:05 compute-0 sudo[233560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:09:05 compute-0 sudo[233560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:05 compute-0 sudo[233560]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:09:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:09:05 compute-0 sudo[233585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:09:05 compute-0 sudo[233585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
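The cephadm call that ceph-admin runs here wraps a ceph-volume invocation in short-lived ceph containers (the podman create/start/remove lines that follow): it prepares the logical volume /dev/ceph_vg0/ceph_lv0 as an OSD for cluster f3005f84-239a-55b6-a948-8f1fb592b920. Reduced to the essential arguments copied from the log line, and omitting the --image/--env/--timeout/--config-json plumbing that cephadm supplies itself, the step looks roughly like:

    # Illustrative reduction of the cephadm ceph-volume call above; FSID and the
    # LV path come from the log, everything else cephadm normally fills in.
    import subprocess

    subprocess.run(
        [
            "cephadm", "ceph-volume",
            "--fsid", "f3005f84-239a-55b6-a948-8f1fb592b920",
            "--", "lvm", "batch", "--no-auto",
            "/dev/ceph_vg0/ceph_lv0",
            "--yes", "--no-systemd",
        ],
        check=True,
    )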
Jan 23 10:09:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:05.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:05 compute-0 podman[233672]: 2026-01-23 10:09:05.451521318 +0000 UTC m=+0.039175972 container create b7e6e8cfa9dc2b772e54fbf2ba28c57a9cc3ad535c99b613083753f00a6327b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:09:05 compute-0 systemd[1]: Started libpod-conmon-b7e6e8cfa9dc2b772e54fbf2ba28c57a9cc3ad535c99b613083753f00a6327b6.scope.
Jan 23 10:09:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:05 compute-0 podman[233672]: 2026-01-23 10:09:05.434260934 +0000 UTC m=+0.021915608 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:09:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:05.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:09:05 compute-0 podman[233672]: 2026-01-23 10:09:05.560104304 +0000 UTC m=+0.147758978 container init b7e6e8cfa9dc2b772e54fbf2ba28c57a9cc3ad535c99b613083753f00a6327b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shirley, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Jan 23 10:09:05 compute-0 podman[233686]: 2026-01-23 10:09:05.561405352 +0000 UTC m=+0.084770377 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 23 10:09:05 compute-0 podman[233672]: 2026-01-23 10:09:05.566461016 +0000 UTC m=+0.154115670 container start b7e6e8cfa9dc2b772e54fbf2ba28c57a9cc3ad535c99b613083753f00a6327b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shirley, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 10:09:05 compute-0 podman[233672]: 2026-01-23 10:09:05.569515344 +0000 UTC m=+0.157170028 container attach b7e6e8cfa9dc2b772e54fbf2ba28c57a9cc3ad535c99b613083753f00a6327b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shirley, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:09:05 compute-0 elegant_shirley[233697]: 167 167
Jan 23 10:09:05 compute-0 systemd[1]: libpod-b7e6e8cfa9dc2b772e54fbf2ba28c57a9cc3ad535c99b613083753f00a6327b6.scope: Deactivated successfully.
Jan 23 10:09:05 compute-0 podman[233672]: 2026-01-23 10:09:05.573542879 +0000 UTC m=+0.161197533 container died b7e6e8cfa9dc2b772e54fbf2ba28c57a9cc3ad535c99b613083753f00a6327b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shirley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 23 10:09:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5af784991f7941effb81d00c0f3dfc288856ffafa8f3eb6969cd5ec7865375e9-merged.mount: Deactivated successfully.
Jan 23 10:09:05 compute-0 podman[233672]: 2026-01-23 10:09:05.610480086 +0000 UTC m=+0.198134740 container remove b7e6e8cfa9dc2b772e54fbf2ba28c57a9cc3ad535c99b613083753f00a6327b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shirley, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Jan 23 10:09:05 compute-0 systemd[1]: libpod-conmon-b7e6e8cfa9dc2b772e54fbf2ba28c57a9cc3ad535c99b613083753f00a6327b6.scope: Deactivated successfully.
Jan 23 10:09:05 compute-0 ceph-mon[74335]: pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:09:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:09:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:09:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:09:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:09:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:09:05 compute-0 podman[233730]: 2026-01-23 10:09:05.767941201 +0000 UTC m=+0.047281354 container create 88a8cdb974bb638967f47e0c7f5c0b46e36b94616401a443673d0e03146a5a72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:09:05 compute-0 systemd[1]: Started libpod-conmon-88a8cdb974bb638967f47e0c7f5c0b46e36b94616401a443673d0e03146a5a72.scope.
Jan 23 10:09:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:09:05 compute-0 podman[233730]: 2026-01-23 10:09:05.750006968 +0000 UTC m=+0.029347121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/730851227bd36b4fb8c63e0b6b80605949fe4ff1d7936b831ea910be75999fa6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/730851227bd36b4fb8c63e0b6b80605949fe4ff1d7936b831ea910be75999fa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/730851227bd36b4fb8c63e0b6b80605949fe4ff1d7936b831ea910be75999fa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/730851227bd36b4fb8c63e0b6b80605949fe4ff1d7936b831ea910be75999fa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/730851227bd36b4fb8c63e0b6b80605949fe4ff1d7936b831ea910be75999fa6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:05 compute-0 podman[233730]: 2026-01-23 10:09:05.857710009 +0000 UTC m=+0.137050192 container init 88a8cdb974bb638967f47e0c7f5c0b46e36b94616401a443673d0e03146a5a72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:09:05 compute-0 podman[233730]: 2026-01-23 10:09:05.866859301 +0000 UTC m=+0.146199454 container start 88a8cdb974bb638967f47e0c7f5c0b46e36b94616401a443673d0e03146a5a72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 23 10:09:05 compute-0 podman[233730]: 2026-01-23 10:09:05.870849655 +0000 UTC m=+0.150189808 container attach 88a8cdb974bb638967f47e0c7f5c0b46e36b94616401a443673d0e03146a5a72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_brahmagupta, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:09:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:05.925605) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162945925814, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1198, "num_deletes": 260, "total_data_size": 2149121, "memory_usage": 2184192, "flush_reason": "Manual Compaction"}
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162945948039, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2098879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17739, "largest_seqno": 18935, "table_properties": {"data_size": 2093326, "index_size": 2947, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11386, "raw_average_key_size": 18, "raw_value_size": 2082056, "raw_average_value_size": 3402, "num_data_blocks": 132, "num_entries": 612, "num_filter_entries": 612, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162833, "oldest_key_time": 1769162833, "file_creation_time": 1769162945, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 22483 microseconds, and 6189 cpu microseconds.
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:05.948142) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2098879 bytes OK
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:05.948172) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:05.950557) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:05.950577) EVENT_LOG_v1 {"time_micros": 1769162945950574, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:05.950597) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2143823, prev total WAL file size 2143823, number of live WAL files 2.
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:05.951446) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323536' seq:0, type:0; will stop at (end)
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2049KB)], [38(11MB)]
Jan 23 10:09:05 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162945951649, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 13884474, "oldest_snapshot_seqno": -1}
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4952 keys, 13427272 bytes, temperature: kUnknown
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162946064816, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13427272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13392361, "index_size": 21425, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 126025, "raw_average_key_size": 25, "raw_value_size": 13300507, "raw_average_value_size": 2685, "num_data_blocks": 881, "num_entries": 4952, "num_filter_entries": 4952, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769162945, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:06.065241) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13427272 bytes
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:06.067616) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.5 rd, 118.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.2 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(13.0) write-amplify(6.4) OK, records in: 5486, records dropped: 534 output_compression: NoCompression
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:06.067635) EVENT_LOG_v1 {"time_micros": 1769162946067627, "job": 18, "event": "compaction_finished", "compaction_time_micros": 113384, "compaction_time_cpu_micros": 34312, "output_level": 6, "num_output_files": 1, "total_output_size": 13427272, "num_input_records": 5486, "num_output_records": 4952, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162946068595, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162946070467, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:05.951246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:06.070653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:06.070660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:06.070664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:06.070666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:09:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:09:06.070668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:09:06 compute-0 loving_brahmagupta[233748]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:09:06 compute-0 loving_brahmagupta[233748]: --> All data devices are unavailable
Jan 23 10:09:06 compute-0 systemd[1]: libpod-88a8cdb974bb638967f47e0c7f5c0b46e36b94616401a443673d0e03146a5a72.scope: Deactivated successfully.
Jan 23 10:09:06 compute-0 podman[233730]: 2026-01-23 10:09:06.250428546 +0000 UTC m=+0.529768719 container died 88a8cdb974bb638967f47e0c7f5c0b46e36b94616401a443673d0e03146a5a72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_brahmagupta, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 23 10:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-730851227bd36b4fb8c63e0b6b80605949fe4ff1d7936b831ea910be75999fa6-merged.mount: Deactivated successfully.
Jan 23 10:09:06 compute-0 podman[233730]: 2026-01-23 10:09:06.293279172 +0000 UTC m=+0.572619325 container remove 88a8cdb974bb638967f47e0c7f5c0b46e36b94616401a443673d0e03146a5a72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 10:09:06 compute-0 systemd[1]: libpod-conmon-88a8cdb974bb638967f47e0c7f5c0b46e36b94616401a443673d0e03146a5a72.scope: Deactivated successfully.
Jan 23 10:09:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:09:06 compute-0 sudo[233585]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:06 compute-0 sudo[233873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:09:06 compute-0 sudo[233873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:06 compute-0 sudo[233873]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:06 compute-0 sudo[233898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:09:06 compute-0 sudo[233898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:06 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:06 compute-0 sudo[233960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byvajrtvrlamrvihnipktfwfzeselzxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162945.8870008-1243-237537822184957/AnsiballZ_file.py'
Jan 23 10:09:06 compute-0 sudo[233960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:06 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:06 compute-0 podman[233994]: 2026-01-23 10:09:06.869970642 +0000 UTC m=+0.037302879 container create c6015a527c97c4a9499b1f40644a077e4bd04fa08e6e3c911d720c289aa169d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_galileo, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 10:09:06 compute-0 python3.9[233967]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:06 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:06 compute-0 systemd[1]: Started libpod-conmon-c6015a527c97c4a9499b1f40644a077e4bd04fa08e6e3c911d720c289aa169d5.scope.
Jan 23 10:09:06 compute-0 sudo[233960]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:09:06 compute-0 ceph-mon[74335]: pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:09:06 compute-0 podman[233994]: 2026-01-23 10:09:06.947323815 +0000 UTC m=+0.114656052 container init c6015a527c97c4a9499b1f40644a077e4bd04fa08e6e3c911d720c289aa169d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_galileo, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:09:06 compute-0 podman[233994]: 2026-01-23 10:09:06.853290014 +0000 UTC m=+0.020622261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:09:06 compute-0 podman[233994]: 2026-01-23 10:09:06.95517624 +0000 UTC m=+0.122508477 container start c6015a527c97c4a9499b1f40644a077e4bd04fa08e6e3c911d720c289aa169d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 10:09:06 compute-0 podman[233994]: 2026-01-23 10:09:06.95869653 +0000 UTC m=+0.126028767 container attach c6015a527c97c4a9499b1f40644a077e4bd04fa08e6e3c911d720c289aa169d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:09:06 compute-0 hungry_galileo[234010]: 167 167
Jan 23 10:09:06 compute-0 systemd[1]: libpod-c6015a527c97c4a9499b1f40644a077e4bd04fa08e6e3c911d720c289aa169d5.scope: Deactivated successfully.
Jan 23 10:09:06 compute-0 podman[233994]: 2026-01-23 10:09:06.960950565 +0000 UTC m=+0.128282802 container died c6015a527c97c4a9499b1f40644a077e4bd04fa08e6e3c911d720c289aa169d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 10:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-45185502481d5faef047915b7e487085a83ca5ebe51f29d179e1352bcd798448-merged.mount: Deactivated successfully.
Jan 23 10:09:06 compute-0 podman[233994]: 2026-01-23 10:09:06.998342695 +0000 UTC m=+0.165674942 container remove c6015a527c97c4a9499b1f40644a077e4bd04fa08e6e3c911d720c289aa169d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:09:07 compute-0 systemd[1]: libpod-conmon-c6015a527c97c4a9499b1f40644a077e4bd04fa08e6e3c911d720c289aa169d5.scope: Deactivated successfully.
Jan 23 10:09:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:09:07.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:09:07 compute-0 podman[234057]: 2026-01-23 10:09:07.150995622 +0000 UTC m=+0.042034113 container create 57ac69f809e3b37af9fc5186b551aa583af6d1ed780fbfa51470be59c13f1e3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:09:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:07.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:07 compute-0 systemd[1]: Started libpod-conmon-57ac69f809e3b37af9fc5186b551aa583af6d1ed780fbfa51470be59c13f1e3b.scope.
Jan 23 10:09:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7663bbc2680de342660bea09ee4adb27687dd38fa60b01d05a7ebaa913335f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7663bbc2680de342660bea09ee4adb27687dd38fa60b01d05a7ebaa913335f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7663bbc2680de342660bea09ee4adb27687dd38fa60b01d05a7ebaa913335f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7663bbc2680de342660bea09ee4adb27687dd38fa60b01d05a7ebaa913335f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:07 compute-0 podman[234057]: 2026-01-23 10:09:07.132240046 +0000 UTC m=+0.023278567 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:09:07 compute-0 podman[234057]: 2026-01-23 10:09:07.234819281 +0000 UTC m=+0.125857822 container init 57ac69f809e3b37af9fc5186b551aa583af6d1ed780fbfa51470be59c13f1e3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 10:09:07 compute-0 podman[234057]: 2026-01-23 10:09:07.241070069 +0000 UTC m=+0.132108550 container start 57ac69f809e3b37af9fc5186b551aa583af6d1ed780fbfa51470be59c13f1e3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:09:07 compute-0 podman[234057]: 2026-01-23 10:09:07.244137277 +0000 UTC m=+0.135175808 container attach 57ac69f809e3b37af9fc5186b551aa583af6d1ed780fbfa51470be59c13f1e3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:09:07 compute-0 funny_maxwell[234073]: {
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:     "1": [
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:         {
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             "devices": [
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "/dev/loop3"
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             ],
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             "lv_name": "ceph_lv0",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             "lv_size": "21470642176",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             "name": "ceph_lv0",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             "tags": {
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.cluster_name": "ceph",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.crush_device_class": "",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.encrypted": "0",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.osd_id": "1",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.type": "block",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.vdo": "0",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:                 "ceph.with_tpm": "0"
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             },
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             "type": "block",
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:             "vg_name": "ceph_vg0"
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:         }
Jan 23 10:09:07 compute-0 funny_maxwell[234073]:     ]
Jan 23 10:09:07 compute-0 funny_maxwell[234073]: }
Jan 23 10:09:07 compute-0 systemd[1]: libpod-57ac69f809e3b37af9fc5186b551aa583af6d1ed780fbfa51470be59c13f1e3b.scope: Deactivated successfully.
Jan 23 10:09:07 compute-0 podman[234057]: 2026-01-23 10:09:07.525421815 +0000 UTC m=+0.416460306 container died 57ac69f809e3b37af9fc5186b551aa583af6d1ed780fbfa51470be59c13f1e3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:09:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:07.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7663bbc2680de342660bea09ee4adb27687dd38fa60b01d05a7ebaa913335f8-merged.mount: Deactivated successfully.
Jan 23 10:09:07 compute-0 podman[234057]: 2026-01-23 10:09:07.568423605 +0000 UTC m=+0.459462096 container remove 57ac69f809e3b37af9fc5186b551aa583af6d1ed780fbfa51470be59c13f1e3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:09:07 compute-0 systemd[1]: libpod-conmon-57ac69f809e3b37af9fc5186b551aa583af6d1ed780fbfa51470be59c13f1e3b.scope: Deactivated successfully.
Jan 23 10:09:07 compute-0 sudo[233898]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:07 compute-0 sudo[234098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:09:07 compute-0 sudo[234098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:07 compute-0 sudo[234098]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:07 compute-0 sudo[234149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:09:07 compute-0 sudo[234149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:07 compute-0 sudo[234270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rorymdxfmjnoobtuzujztvzmzpljxrnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162947.649024-1276-242338722420584/AnsiballZ_systemd_service.py'
Jan 23 10:09:07 compute-0 sudo[234270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:08 compute-0 podman[234311]: 2026-01-23 10:09:08.10683618 +0000 UTC m=+0.044274777 container create c198d7c3524ee8e8e544d362a9986013d2fc6949a0d2d1a9c0221809ac492901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lamport, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 23 10:09:08 compute-0 systemd[1]: Started libpod-conmon-c198d7c3524ee8e8e544d362a9986013d2fc6949a0d2d1a9c0221809ac492901.scope.
Jan 23 10:09:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:09:08 compute-0 podman[234311]: 2026-01-23 10:09:08.087406024 +0000 UTC m=+0.024844721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:09:08 compute-0 podman[234311]: 2026-01-23 10:09:08.187093457 +0000 UTC m=+0.124532074 container init c198d7c3524ee8e8e544d362a9986013d2fc6949a0d2d1a9c0221809ac492901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:09:08 compute-0 podman[234311]: 2026-01-23 10:09:08.192933174 +0000 UTC m=+0.130371771 container start c198d7c3524ee8e8e544d362a9986013d2fc6949a0d2d1a9c0221809ac492901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lamport, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:09:08 compute-0 podman[234311]: 2026-01-23 10:09:08.196620809 +0000 UTC m=+0.134059436 container attach c198d7c3524ee8e8e544d362a9986013d2fc6949a0d2d1a9c0221809ac492901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:09:08 compute-0 modest_lamport[234327]: 167 167
Jan 23 10:09:08 compute-0 systemd[1]: libpod-c198d7c3524ee8e8e544d362a9986013d2fc6949a0d2d1a9c0221809ac492901.scope: Deactivated successfully.
Jan 23 10:09:08 compute-0 podman[234311]: 2026-01-23 10:09:08.197932587 +0000 UTC m=+0.135371194 container died c198d7c3524ee8e8e544d362a9986013d2fc6949a0d2d1a9c0221809ac492901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lamport, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:09:08 compute-0 python3.9[234279]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 10:09:08 compute-0 systemd[1]: Reloading.
Jan 23 10:09:08 compute-0 podman[234311]: 2026-01-23 10:09:08.233029641 +0000 UTC m=+0.170468248 container remove c198d7c3524ee8e8e544d362a9986013d2fc6949a0d2d1a9c0221809ac492901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lamport, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:09:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:09:08 compute-0 systemd-rc-local-generator[234370]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:09:08 compute-0 systemd-sysv-generator[234373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:09:08 compute-0 podman[234384]: 2026-01-23 10:09:08.408045017 +0000 UTC m=+0.043187376 container create c5df063c329017f2086edbd94c0f73223e2fbf4868bbf0df15e64a8ca1b4a701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 23 10:09:08 compute-0 podman[234384]: 2026-01-23 10:09:08.3903219 +0000 UTC m=+0.025464289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:09:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-88361bbf3cf82fdc93bf5a285782d8d4a6d66140e5f69afc58aeb181d99e6bd8-merged.mount: Deactivated successfully.
Jan 23 10:09:08 compute-0 systemd[1]: libpod-conmon-c198d7c3524ee8e8e544d362a9986013d2fc6949a0d2d1a9c0221809ac492901.scope: Deactivated successfully.
Jan 23 10:09:08 compute-0 systemd[1]: Started libpod-conmon-c5df063c329017f2086edbd94c0f73223e2fbf4868bbf0df15e64a8ca1b4a701.scope.
Jan 23 10:09:08 compute-0 sudo[234270]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e3ef29a1679270b38f78526a87bcf3af9132ec72e4690bd98da751a41ec8de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e3ef29a1679270b38f78526a87bcf3af9132ec72e4690bd98da751a41ec8de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e3ef29a1679270b38f78526a87bcf3af9132ec72e4690bd98da751a41ec8de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e3ef29a1679270b38f78526a87bcf3af9132ec72e4690bd98da751a41ec8de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:09:08 compute-0 podman[234384]: 2026-01-23 10:09:08.614515035 +0000 UTC m=+0.249657414 container init c5df063c329017f2086edbd94c0f73223e2fbf4868bbf0df15e64a8ca1b4a701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:09:08 compute-0 podman[234384]: 2026-01-23 10:09:08.624041697 +0000 UTC m=+0.259184056 container start c5df063c329017f2086edbd94c0f73223e2fbf4868bbf0df15e64a8ca1b4a701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_leakey, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:09:08 compute-0 podman[234384]: 2026-01-23 10:09:08.62693058 +0000 UTC m=+0.262072969 container attach c5df063c329017f2086edbd94c0f73223e2fbf4868bbf0df15e64a8ca1b4a701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 23 10:09:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:09:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:09.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:09 compute-0 lvm[234625]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:09:09 compute-0 lvm[234625]: VG ceph_vg0 finished
Jan 23 10:09:09 compute-0 python3.9[234608]: ansible-ansible.builtin.service_facts Invoked
Jan 23 10:09:09 compute-0 intelligent_leakey[234401]: {}
Jan 23 10:09:09 compute-0 network[234645]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 10:09:09 compute-0 network[234646]: 'network-scripts' will be removed from distribution in near future.
Jan 23 10:09:09 compute-0 network[234647]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 10:09:09 compute-0 systemd[1]: libpod-c5df063c329017f2086edbd94c0f73223e2fbf4868bbf0df15e64a8ca1b4a701.scope: Deactivated successfully.
Jan 23 10:09:09 compute-0 podman[234384]: 2026-01-23 10:09:09.407541807 +0000 UTC m=+1.042684166 container died c5df063c329017f2086edbd94c0f73223e2fbf4868bbf0df15e64a8ca1b4a701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 10:09:09 compute-0 systemd[1]: libpod-c5df063c329017f2086edbd94c0f73223e2fbf4868bbf0df15e64a8ca1b4a701.scope: Consumed 1.271s CPU time.
Jan 23 10:09:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:09.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:09 compute-0 ceph-mon[74335]: pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:09:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:09] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:09:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:09] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9e3ef29a1679270b38f78526a87bcf3af9132ec72e4690bd98da751a41ec8de-merged.mount: Deactivated successfully.
Jan 23 10:09:10 compute-0 podman[234384]: 2026-01-23 10:09:10.274047667 +0000 UTC m=+1.909190026 container remove c5df063c329017f2086edbd94c0f73223e2fbf4868bbf0df15e64a8ca1b4a701 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 23 10:09:10 compute-0 systemd[1]: libpod-conmon-c5df063c329017f2086edbd94c0f73223e2fbf4868bbf0df15e64a8ca1b4a701.scope: Deactivated successfully.
Jan 23 10:09:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:09:10 compute-0 sudo[234149]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:09:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:09:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003d60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:11 compute-0 sudo[234710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:09:11 compute-0 sudo[234710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:11 compute-0 sudo[234710]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:11.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:11 compute-0 ceph-mon[74335]: pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:09:11 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:11 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:09:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:11.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:11 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:09:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:11 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:09:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:09:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003d60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:09:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:13.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:09:13 compute-0 ceph-mon[74335]: pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:09:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:13.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:13 compute-0 sudo[234962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ileddjmtpxbumeudgexiktcekbemvxsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162953.4132226-1333-71132876687660/AnsiballZ_systemd_service.py'
Jan 23 10:09:13 compute-0 sudo[234962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:14 compute-0 python3.9[234964]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:09:14 compute-0 sudo[234962]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:09:14 compute-0 sudo[235117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcmslbsjugsnetjgylsnyzloykcgszlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162954.1805663-1333-134780406063733/AnsiballZ_systemd_service.py'
Jan 23 10:09:14 compute-0 sudo[235117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:14 compute-0 python3.9[235119]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:09:14 compute-0 sudo[235117]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003d60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:15 compute-0 sudo[235270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgyrekhanyjidxazjiveqrqwqpdizgwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162954.8989816-1333-73987525389937/AnsiballZ_systemd_service.py'
Jan 23 10:09:15 compute-0 sudo[235270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:15.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:15 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 23 10:09:15 compute-0 python3.9[235272]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:09:15 compute-0 sudo[235270]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:15.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:15 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:09:15 compute-0 ceph-mon[74335]: pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 23 10:09:15 compute-0 sudo[235425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikionsmsturmdzlwnstgtxitfwncvxwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162955.6272385-1333-20298531345409/AnsiballZ_systemd_service.py'
Jan 23 10:09:15 compute-0 sudo[235425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:16 compute-0 python3.9[235427]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:09:16 compute-0 sudo[235425]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:09:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:16 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 23 10:09:16 compute-0 sudo[235580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apwpjazgylvnmjplyjuqdakqpopqvfum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162956.4060292-1333-233251107970322/AnsiballZ_systemd_service.py'
Jan 23 10:09:16 compute-0 sudo[235580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:16 compute-0 ceph-mon[74335]: pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:09:17 compute-0 python3.9[235582]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:09:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:09:17.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:09:17 compute-0 sudo[235580]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:17.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:17 compute-0 sudo[235733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbuckpnvufqkbeoxwjnfptomxivffbuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162957.1951237-1333-259068570923903/AnsiballZ_systemd_service.py'
Jan 23 10:09:17 compute-0 sudo[235733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:17.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:17 compute-0 python3.9[235735]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:09:17 compute-0 sudo[235733]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:18 compute-0 sudo[235887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncjnlimghztrxnblrqaqwkdrjgaeifsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162957.9111934-1333-259339269254992/AnsiballZ_systemd_service.py'
Jan 23 10:09:18 compute-0 sudo[235887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:09:18 compute-0 python3.9[235889]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:09:18 compute-0 sudo[235887]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003d60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:18 compute-0 sudo[236041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfadmwrmcwmrgcfvbcidzdpdkohmbrcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162958.585632-1333-238792887183009/AnsiballZ_systemd_service.py'
Jan 23 10:09:18 compute-0 sudo[236041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:18 compute-0 podman[236043]: 2026-01-23 10:09:18.945157561 +0000 UTC m=+0.084117713 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 10:09:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:19.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:19 compute-0 python3.9[236044]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:09:19 compute-0 sudo[236041]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:19 compute-0 ceph-mon[74335]: pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:09:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:19.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/100919 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:09:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:09:19
Jan 23 10:09:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:09:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:09:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'backups', 'images', 'volumes', 'cephfs.cephfs.meta', '.nfs', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.control']
Jan 23 10:09:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:09:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:19] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:09:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:19] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:09:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:09:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:09:20 compute-0 sudo[236222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orgcwjymlfpkyrxnvcjpwawfoefzcvmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162959.9793394-1510-861655740553/AnsiballZ_file.py'
Jan 23 10:09:20 compute-0 sudo[236222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:09:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:09:20 compute-0 python3.9[236224]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:20 compute-0 sudo[236222]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:09:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:20 compute-0 sudo[236375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzsophmtieyqowirnxhgmmhlpaqdushl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162960.551248-1510-234523641240677/AnsiballZ_file.py'
Jan 23 10:09:20 compute-0 sudo[236375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:20 compute-0 python3.9[236377]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:21 compute-0 sudo[236375]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:21.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:21 compute-0 sudo[236425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:09:21 compute-0 sudo[236425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:21 compute-0 sudo[236425]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:21 compute-0 sudo[236552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtcieylpaopzxihkobrubnncmobehjli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162961.1357477-1510-177129565268724/AnsiballZ_file.py'
Jan 23 10:09:21 compute-0 sudo[236552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:21.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:21 compute-0 python3.9[236554]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:21 compute-0 ceph-mon[74335]: pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:09:21 compute-0 sudo[236552]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:21 compute-0 sudo[236705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvfunqnhezoatvswcxejoqdxvxdpmsxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162961.7182796-1510-54176620412864/AnsiballZ_file.py'
Jan 23 10:09:21 compute-0 sudo[236705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:22 compute-0 python3.9[236707]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:22 compute-0 sudo[236705]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:09:22 compute-0 sudo[236858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkglxubugwilgqgxhdyccckxakbbesby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162962.269686-1510-213266750549575/AnsiballZ_file.py'
Jan 23 10:09:22 compute-0 sudo[236858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:22 compute-0 python3.9[236860]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:22 compute-0 sudo[236858]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003d80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:22 compute-0 ceph-mon[74335]: pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:09:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:23 compute-0 sudo[237010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wworkgbgveruijjmzbivfcyplhkdkolo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162962.8352282-1510-196221975523988/AnsiballZ_file.py'
Jan 23 10:09:23 compute-0 sudo[237010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:23.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:23 compute-0 python3.9[237012]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:23 compute-0 sudo[237010]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:09:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:23.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:09:23 compute-0 sudo[237162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyfbfsajugtouhquzadeemjfxakffduk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162963.4236453-1510-107296227148824/AnsiballZ_file.py'
Jan 23 10:09:23 compute-0 sudo[237162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:23 compute-0 python3.9[237164]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:23 compute-0 sudo[237162]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:24 compute-0 sudo[237315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ninsoqklbuscdemqbjkjvjtpyeaykowp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162963.9867768-1510-269193588553531/AnsiballZ_file.py'
Jan 23 10:09:24 compute-0 sudo[237315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:09:24 compute-0 python3.9[237317]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:24 compute-0 sudo[237315]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:25.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:25 compute-0 sudo[237468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prfnlyczgvsymbvklyfskahhuyktcleb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162964.9804206-1681-20945206691587/AnsiballZ_file.py'
Jan 23 10:09:25 compute-0 sudo[237468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:25 compute-0 python3.9[237470]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:25 compute-0 sudo[237468]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:25.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:25 compute-0 ceph-mon[74335]: pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:09:25 compute-0 sudo[237620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbaptbjzbxcccyilkwpiwpbtdheclohu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162965.5261238-1681-257042415007589/AnsiballZ_file.py'
Jan 23 10:09:25 compute-0 sudo[237620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:25 compute-0 python3.9[237623]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:25 compute-0 sudo[237620]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:09:26 compute-0 sudo[237774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxekstccicyrjqqvfhpppvouathrhifq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162966.1010535-1681-263308467130956/AnsiballZ_file.py'
Jan 23 10:09:26 compute-0 sudo[237774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:26 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 23 10:09:26 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 23 10:09:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:26 compute-0 python3.9[237778]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:26 compute-0 sudo[237774]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:26 compute-0 sudo[237928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffakplzztacejjuxwaxkccvgenoprclc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162966.6819286-1681-168093395327797/AnsiballZ_file.py'
Jan 23 10:09:26 compute-0 sudo[237928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:09:27.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:09:27 compute-0 python3.9[237930]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:27 compute-0 sudo[237928]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:27.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:27 compute-0 sudo[238080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymycjyymwzonujojfosezadwghfoyzxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162967.257883-1681-46669710741135/AnsiballZ_file.py'
Jan 23 10:09:27 compute-0 sudo[238080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:27.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:27 compute-0 python3.9[238082]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:27 compute-0 sudo[238080]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:27 compute-0 ceph-mon[74335]: pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:09:28 compute-0 sudo[238233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqtuicgxuwjorbnzfwyrucioaxairrrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162967.8295536-1681-165268845658259/AnsiballZ_file.py'
Jan 23 10:09:28 compute-0 sudo[238233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:28 compute-0 python3.9[238235]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:28 compute-0 sudo[238233]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:09:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:28 compute-0 sudo[238386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asxjxfrmcdhmhwlvqwitbshgjcmlvgsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162968.3752668-1681-210513229548609/AnsiballZ_file.py'
Jan 23 10:09:28 compute-0 sudo[238386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:28 compute-0 python3.9[238388]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:28 compute-0 sudo[238386]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:29 compute-0 ceph-mon[74335]: pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:09:29 compute-0 sudo[238538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxbbdvrvmzdhnlzccbfkzpzumdjpdgqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162968.9132442-1681-75348049132299/AnsiballZ_file.py'
Jan 23 10:09:29 compute-0 sudo[238538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:29.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:29 compute-0 python3.9[238540]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:09:29 compute-0 sudo[238538]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:29.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:29] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:09:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:29] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:09:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:09:30 compute-0 sudo[238692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fafftgxdexfrpescuxtnqqmimxisnhpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162970.2077258-1855-184359009028135/AnsiballZ_command.py'
Jan 23 10:09:30 compute-0 sudo[238692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:30 compute-0 python3.9[238694]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:09:30 compute-0 sudo[238692]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:31 compute-0 ceph-mon[74335]: pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:09:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:31.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:31 compute-0 python3.9[238846]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 10:09:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:31.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:32 compute-0 sudo[238997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urdsirqpripbxoopougqjttopjnkwdof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162971.810302-1909-270185695742658/AnsiballZ_systemd_service.py'
Jan 23 10:09:32 compute-0 sudo[238997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:09:32 compute-0 python3.9[238999]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 10:09:32 compute-0 systemd[1]: Reloading.
Jan 23 10:09:32 compute-0 systemd-rc-local-generator[239028]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:09:32 compute-0 systemd-sysv-generator[239031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:09:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:32 compute-0 sudo[238997]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:33.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:33 compute-0 sudo[239186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nviocjwcqlcyeyrshhetvofofjdqhxsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162972.9770412-1933-234751255868073/AnsiballZ_command.py'
Jan 23 10:09:33 compute-0 sudo[239186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:33 compute-0 python3.9[239188]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:09:33 compute-0 ceph-mon[74335]: pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:09:33 compute-0 sudo[239186]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:33.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:33 compute-0 sudo[239340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtwrtearivruopjasxhaqumibewctbfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162973.589132-1933-1794740240019/AnsiballZ_command.py'
Jan 23 10:09:33 compute-0 sudo[239340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:34 compute-0 python3.9[239342]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:09:34 compute-0 sudo[239340]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:34 compute-0 sudo[239494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiaemzymeljdkwnboodnlhhtizeslpap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162974.1797457-1933-245197268186686/AnsiballZ_command.py'
Jan 23 10:09:34 compute-0 sudo[239494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:34 compute-0 python3.9[239496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:09:34 compute-0 sudo[239494]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:35 compute-0 sudo[239647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncwwwaisbaktxlqxvksjjwnaiqqhvphj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162974.7854993-1933-18370526203390/AnsiballZ_command.py'
Jan 23 10:09:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:09:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:09:35 compute-0 sudo[239647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:35.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:35 compute-0 python3.9[239649]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:09:35 compute-0 sudo[239647]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:35 compute-0 ceph-mon[74335]: pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:09:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:35.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:35 compute-0 sudo[239807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyluvjyietijcpjqurhycqraxqjmbpgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162975.4333663-1933-1975327408243/AnsiballZ_command.py'
Jan 23 10:09:35 compute-0 sudo[239807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:35 compute-0 podman[239774]: 2026-01-23 10:09:35.708160565 +0000 UTC m=+0.059205799 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 10:09:35 compute-0 python3.9[239815]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:09:35 compute-0 sudo[239807]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:36 compute-0 sudo[239974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bliotnltbatwwqmbqxakjqhtbavixary ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162976.0165482-1933-270391141123465/AnsiballZ_command.py'
Jan 23 10:09:36 compute-0 sudo[239974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:09:36 compute-0 python3.9[239976]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:09:36 compute-0 sudo[239974]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:36 compute-0 sudo[240127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiivmjsphrfgpiavcwdaowvgbstcitth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162976.6025646-1933-253151228930194/AnsiballZ_command.py'
Jan 23 10:09:36 compute-0 sudo[240127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:37 compute-0 python3.9[240129]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:09:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:09:37.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:09:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:09:37.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:09:37 compute-0 sudo[240127]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:37.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:37 compute-0 sudo[240282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbglqxrmrfvwtquekopvrxjsjejwmekx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162977.180143-1933-42200522754540/AnsiballZ_command.py'
Jan 23 10:09:37 compute-0 sudo[240282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:37 compute-0 python3.9[240284]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 10:09:37 compute-0 ceph-mon[74335]: pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:09:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:37.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:37 compute-0 sudo[240282]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:38 compute-0 ceph-mon[74335]: pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:39.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:39.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:39] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:09:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:39] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:09:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64000f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:41.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:41 compute-0 sudo[240371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:09:41 compute-0 sudo[240371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:09:41 compute-0 sudo[240371]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:41 compute-0 sudo[240464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydpumblyzqjevrgryemqbfxpsenblnps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162981.1081235-2140-255900449246462/AnsiballZ_file.py'
Jan 23 10:09:41 compute-0 sudo[240464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:41 compute-0 python3.9[240466]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:09:41 compute-0 ceph-mon[74335]: pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:41 compute-0 sudo[240464]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:41.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:09:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 7882 writes, 31K keys, 7882 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7882 writes, 1550 syncs, 5.09 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 677 writes, 1212 keys, 677 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
                                           Interval WAL: 677 writes, 322 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 10:09:42 compute-0 sudo[240617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdzuuyiqlnkkaoygemoybzkbafdwxyem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162981.742448-2140-190285901986055/AnsiballZ_file.py'
Jan 23 10:09:42 compute-0 sudo[240617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:42 compute-0 python3.9[240619]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:09:42 compute-0 sudo[240617]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:09:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:42 compute-0 sudo[240770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgxottmxpoeajfnbuqczmcghpcnjpsec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162982.3511498-2140-242257968602390/AnsiballZ_file.py'
Jan 23 10:09:42 compute-0 sudo[240770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:42 compute-0 python3.9[240772]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:09:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:42 compute-0 sudo[240770]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:43.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:43 compute-0 ceph-mgr[74633]: [dashboard INFO request] [192.168.122.100:33048] [POST] [200] [0.004s] [4.0B] [9c408426-6b13-4a62-90d1-35d2d14a5e94] /api/prometheus_receiver
Jan 23 10:09:43 compute-0 sudo[240922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkjtheltzzmxjvzaipzsvewhmfqdgkyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162983.3689172-2206-57866102192389/AnsiballZ_file.py'
Jan 23 10:09:43 compute-0 sudo[240922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:43.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:43 compute-0 python3.9[240924]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:09:43 compute-0 sudo[240922]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:44 compute-0 ceph-mon[74335]: pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:09:44 compute-0 sudo[241075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqcifkspsawkzrsxbpadatmunacvzlee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162983.9688678-2206-108247757351733/AnsiballZ_file.py'
Jan 23 10:09:44 compute-0 sudo[241075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:44 compute-0 python3.9[241077]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:09:44 compute-0 sudo[241075]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:44 compute-0 sudo[241228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgrjtjmlvglphqkmilcrgskqkgcftdki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162984.5711854-2206-102279097911273/AnsiballZ_file.py'
Jan 23 10:09:44 compute-0 sudo[241228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:44 compute-0 python3.9[241230]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:09:45 compute-0 sudo[241228]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:45 compute-0 ceph-mon[74335]: pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:45.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:45.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:45 compute-0 sudo[241380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjhkkthqngsdcsnphjbmojxekbglxwmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162985.1409426-2206-137226882138763/AnsiballZ_file.py'
Jan 23 10:09:45 compute-0 sudo[241380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:45 compute-0 python3.9[241382]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:09:45 compute-0 sudo[241380]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:46 compute-0 sudo[241534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddmuflohxayazplcfhfmbrjnknrhjsvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162986.0351362-2206-91346173853642/AnsiballZ_file.py'
Jan 23 10:09:46 compute-0 sudo[241534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:09:46 compute-0 python3.9[241536]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:09:46 compute-0 sudo[241534]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:46 compute-0 sudo[241686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spaogahdiujmahoylcnpzbzzemfqxytt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162986.6503038-2206-223012382170212/AnsiballZ_file.py'
Jan 23 10:09:46 compute-0 sudo[241686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:09:47.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:09:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:09:47.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:09:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:09:47.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:09:47 compute-0 python3.9[241688]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:09:47 compute-0 sudo[241686]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:47.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:47 compute-0 sudo[241838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkznmtpdframyznzcmbdxcacrutasaqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162987.2406702-2206-172650792789793/AnsiballZ_file.py'
Jan 23 10:09:47 compute-0 sudo[241838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:47.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:47 compute-0 ceph-mon[74335]: pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:09:47 compute-0 python3.9[241840]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:09:47 compute-0 sudo[241838]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:48 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:48 compute-0 ceph-mon[74335]: pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:48 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:48 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:49.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:49 compute-0 podman[241867]: 2026-01-23 10:09:49.567311662 +0000 UTC m=+0.085861543 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:09:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:09:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:49.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:09:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:49] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:09:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:49] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:09:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:09:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:09:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:09:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:09:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:09:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:09:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:09:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:09:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:09:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:50 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:50 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:50 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:51 compute-0 ceph-mon[74335]: pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:51.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:51.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:09:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:52 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:52 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:52 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:53.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:53 compute-0 ceph-mon[74335]: pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:09:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:09:53.565Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:09:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:09:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:53.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:09:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:54 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:54 compute-0 sudo[242025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvxggrlcnbxvuysdigwzytqfjnhumkva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162994.1786444-2531-260376204074323/AnsiballZ_getent.py'
Jan 23 10:09:54 compute-0 sudo[242025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:54 compute-0 python3.9[242027]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 23 10:09:54 compute-0 sudo[242025]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:54 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:54 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:09:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:55.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:09:55 compute-0 ceph-mon[74335]: pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:55 compute-0 sudo[242178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbruwqjxikwxhtqgktwbtfgfifdbdohd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162995.11395-2555-209508343741245/AnsiballZ_group.py'
Jan 23 10:09:55 compute-0 sudo[242178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:55.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:55 compute-0 python3.9[242180]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 10:09:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:09:56 compute-0 groupadd[242182]: group added to /etc/group: name=nova, GID=42436
Jan 23 10:09:56 compute-0 groupadd[242182]: group added to /etc/gshadow: name=nova
Jan 23 10:09:56 compute-0 groupadd[242182]: new group: name=nova, GID=42436
Jan 23 10:09:56 compute-0 sudo[242178]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:09:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:56 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:56 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:56 compute-0 ceph-mon[74335]: pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:09:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:56 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:56 compute-0 sudo[242338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dafcaotvnksfxuqfrzrrlxeuizvahoxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769162996.469304-2579-172323598686238/AnsiballZ_user.py'
Jan 23 10:09:56 compute-0 sudo[242338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:09:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:09:57.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:09:57 compute-0 python3.9[242340]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 10:09:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:57.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:57 compute-0 useradd[242342]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 23 10:09:57 compute-0 useradd[242342]: add 'nova' to group 'libvirt'
Jan 23 10:09:57 compute-0 useradd[242342]: add 'nova' to shadow group 'libvirt'
Jan 23 10:09:57 compute-0 sudo[242338]: pam_unix(sudo:session): session closed for user root
Jan 23 10:09:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:57.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:58 compute-0 sshd-session[242374]: Accepted publickey for zuul from 192.168.122.30 port 50866 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 10:09:58 compute-0 systemd-logind[784]: New session 55 of user zuul.
Jan 23 10:09:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:58 compute-0 systemd[1]: Started Session 55 of User zuul.
Jan 23 10:09:58 compute-0 sshd-session[242374]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:09:58 compute-0 sshd-session[242378]: Received disconnect from 192.168.122.30 port 50866:11: disconnected by user
Jan 23 10:09:58 compute-0 sshd-session[242378]: Disconnected from user zuul 192.168.122.30 port 50866
Jan 23 10:09:58 compute-0 sshd-session[242374]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:09:58 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Jan 23 10:09:58 compute-0 systemd-logind[784]: Session 55 logged out. Waiting for processes to exit.
Jan 23 10:09:58 compute-0 systemd-logind[784]: Removed session 55.
Jan 23 10:09:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:09:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:09:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:09:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:59.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:09:59 compute-0 python3.9[242528]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:09:59 compute-0 ceph-mon[74335]: pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:09:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:09:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:09:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:59.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:09:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:09:59.761 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:09:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:09:59.763 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:09:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:09:59.763 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:09:59 compute-0 python3.9[242649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769162998.966924-2654-281060869672925/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:09:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:59] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:09:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:09:59] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:10:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:10:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:00 compute-0 python3.9[242801]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:10:00 compute-0 ceph-mon[74335]: overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:10:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:00 compute-0 python3.9[242877]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:10:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:01.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:01 compute-0 sudo[243016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:10:01 compute-0 sudo[243016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:01 compute-0 sudo[243016]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:01 compute-0 ceph-mon[74335]: pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:01 compute-0 python3.9[243040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:10:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:01.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:02 compute-0 python3.9[243174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769163001.0135748-2654-264162520821932/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:10:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:10:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:02 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:02 compute-0 python3.9[243325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:10:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:02 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:02 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:03 compute-0 python3.9[243446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769163002.2509549-2654-48353174227680/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:10:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:03.566Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:10:03 compute-0 ceph-mon[74335]: pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:10:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:03.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:04 compute-0 python3.9[243597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:10:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:04 compute-0 python3.9[243719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769163003.5752456-2654-67113296773028/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:10:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:10:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:10:05 compute-0 python3.9[243870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:10:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:05.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:05.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:05 compute-0 ceph-mon[74335]: pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:10:05 compute-0 python3.9[243991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769163004.719821-2654-169069230726916/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:10:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:10:06 compute-0 podman[244018]: 2026-01-23 10:10:06.53203131 +0000 UTC m=+0.061564087 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Jan 23 10:10:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:06 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:06 compute-0 ceph-mon[74335]: pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:10:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:06 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:06 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:07 compute-0 sudo[244162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzvtbsrxappqicojllwitfzooluofyaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163006.7846725-2903-89458335830012/AnsiballZ_file.py'
Jan 23 10:10:07 compute-0 sudo[244162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:07.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:10:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:07.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:10:07 compute-0 python3.9[244164]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:10:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:07.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:07 compute-0 sudo[244162]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:07.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:07 compute-0 sudo[244314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvxjrvuvhbtkwvbohqcdlnqsvradbxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163007.468604-2927-188148606038042/AnsiballZ_copy.py'
Jan 23 10:10:07 compute-0 sudo[244314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:07 compute-0 python3.9[244317]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:10:07 compute-0 sudo[244314]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:08 compute-0 sudo[244471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odnfzutphgfuhzhpgumgkvggesbavgyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163008.3139794-2951-166379528862633/AnsiballZ_stat.py'
Jan 23 10:10:08 compute-0 sudo[244471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:08 compute-0 python3.9[244473]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:10:08 compute-0 sudo[244471]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:09.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:09 compute-0 sudo[244623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltfqefmaqiqmcdufzbfpqdppeyozeqfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163009.022434-2975-54765300196157/AnsiballZ_stat.py'
Jan 23 10:10:09 compute-0 sudo[244623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:09 compute-0 ceph-mon[74335]: pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:09 compute-0 python3.9[244625]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:10:09 compute-0 sudo[244623]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:09.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:09 compute-0 sudo[244747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hditlfzpxuxpsgznntseuuqzzskghigf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163009.022434-2975-54765300196157/AnsiballZ_copy.py'
Jan 23 10:10:09 compute-0 sudo[244747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:09] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:10:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:09] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:10:09 compute-0 python3.9[244749]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769163009.022434-2975-54765300196157/.source _original_basename=.pgg7f6sz follow=False checksum=a23e1b68fb689b94a8020de9f385ac1bad4264a3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 23 10:10:10 compute-0 sudo[244747]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600025b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:10 compute-0 python3.9[244902]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:10:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:10:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:11.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:10:11 compute-0 sudo[245028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:10:11 compute-0 sudo[245028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:11 compute-0 sudo[245028]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:11 compute-0 sudo[245079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:10:11 compute-0 sudo[245079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:11 compute-0 ceph-mon[74335]: pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:11 compute-0 python3.9[245082]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:10:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:11.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 10:10:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 10:10:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:11 compute-0 sudo[245079]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:12 compute-0 python3.9[245245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769163011.1638713-3053-215465029842177/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=53b8456782b81b5794d3eef3fadcfb00db1088a8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:10:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:10:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:10:12 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:10:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:10:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:10:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:10:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:10:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:10:12 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:10:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:10:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:10:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:10:12 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:10:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:12 compute-0 sudo[245358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:10:12 compute-0 sudo[245358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:12 compute-0 sudo[245358]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:12 compute-0 sudo[245383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:10:12 compute-0 sudo[245383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:12 compute-0 ceph-mon[74335]: pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:10:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:10:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:10:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:10:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:10:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:10:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600025b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:12 compute-0 python3.9[245458]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 10:10:13 compute-0 podman[245501]: 2026-01-23 10:10:13.02719005 +0000 UTC m=+0.038787814 container create b5d1be515a7fb7b3d43e68d20fb2b7f345c3d1662bb4739563d456d0a693d8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hoover, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:10:13 compute-0 systemd[1]: Started libpod-conmon-b5d1be515a7fb7b3d43e68d20fb2b7f345c3d1662bb4739563d456d0a693d8c6.scope.
Jan 23 10:10:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:10:13 compute-0 podman[245501]: 2026-01-23 10:10:13.010007047 +0000 UTC m=+0.021604831 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:10:13 compute-0 podman[245501]: 2026-01-23 10:10:13.120291843 +0000 UTC m=+0.131889607 container init b5d1be515a7fb7b3d43e68d20fb2b7f345c3d1662bb4739563d456d0a693d8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 10:10:13 compute-0 podman[245501]: 2026-01-23 10:10:13.127486089 +0000 UTC m=+0.139083853 container start b5d1be515a7fb7b3d43e68d20fb2b7f345c3d1662bb4739563d456d0a693d8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:10:13 compute-0 podman[245501]: 2026-01-23 10:10:13.13101015 +0000 UTC m=+0.142607914 container attach b5d1be515a7fb7b3d43e68d20fb2b7f345c3d1662bb4739563d456d0a693d8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 23 10:10:13 compute-0 eager_hoover[245541]: 167 167
Jan 23 10:10:13 compute-0 systemd[1]: libpod-b5d1be515a7fb7b3d43e68d20fb2b7f345c3d1662bb4739563d456d0a693d8c6.scope: Deactivated successfully.
Jan 23 10:10:13 compute-0 podman[245501]: 2026-01-23 10:10:13.136639242 +0000 UTC m=+0.148237006 container died b5d1be515a7fb7b3d43e68d20fb2b7f345c3d1662bb4739563d456d0a693d8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hoover, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-84e4fd479fa11133a9b4fe487335a1dd6f53b5f5326f5853e3fc54cc8ee6d86a-merged.mount: Deactivated successfully.
Jan 23 10:10:13 compute-0 podman[245501]: 2026-01-23 10:10:13.184130416 +0000 UTC m=+0.195728180 container remove b5d1be515a7fb7b3d43e68d20fb2b7f345c3d1662bb4739563d456d0a693d8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 10:10:13 compute-0 systemd[1]: libpod-conmon-b5d1be515a7fb7b3d43e68d20fb2b7f345c3d1662bb4739563d456d0a693d8c6.scope: Deactivated successfully.
Jan 23 10:10:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:13.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:13 compute-0 podman[245640]: 2026-01-23 10:10:13.352093549 +0000 UTC m=+0.043359826 container create c13710c0e08c4be570b5ab4cf7a71ebce55fd77fdfffd33765ed6e39c807f582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_thompson, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 23 10:10:13 compute-0 systemd[1]: Started libpod-conmon-c13710c0e08c4be570b5ab4cf7a71ebce55fd77fdfffd33765ed6e39c807f582.scope.
Jan 23 10:10:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe209b27a1325f7b6b51a0e5352e0296fb78879efe7afefcf2a990d7cebebcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe209b27a1325f7b6b51a0e5352e0296fb78879efe7afefcf2a990d7cebebcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe209b27a1325f7b6b51a0e5352e0296fb78879efe7afefcf2a990d7cebebcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe209b27a1325f7b6b51a0e5352e0296fb78879efe7afefcf2a990d7cebebcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe209b27a1325f7b6b51a0e5352e0296fb78879efe7afefcf2a990d7cebebcb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:13 compute-0 podman[245640]: 2026-01-23 10:10:13.438216252 +0000 UTC m=+0.129482559 container init c13710c0e08c4be570b5ab4cf7a71ebce55fd77fdfffd33765ed6e39c807f582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 23 10:10:13 compute-0 podman[245640]: 2026-01-23 10:10:13.334745311 +0000 UTC m=+0.026011608 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:10:13 compute-0 podman[245640]: 2026-01-23 10:10:13.446895491 +0000 UTC m=+0.138161758 container start c13710c0e08c4be570b5ab4cf7a71ebce55fd77fdfffd33765ed6e39c807f582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_thompson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:10:13 compute-0 podman[245640]: 2026-01-23 10:10:13.450090353 +0000 UTC m=+0.141356640 container attach c13710c0e08c4be570b5ab4cf7a71ebce55fd77fdfffd33765ed6e39c807f582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_thompson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:10:13 compute-0 python3.9[245672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769163012.3048592-3098-98439485827625/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=0333d3a3f5c3a0526b0ebe430250032166710e8a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 10:10:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:13.567Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:10:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:13.568Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:10:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:13.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:13 compute-0 nice_thompson[245677]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:10:13 compute-0 nice_thompson[245677]: --> All data devices are unavailable
Jan 23 10:10:13 compute-0 systemd[1]: libpod-c13710c0e08c4be570b5ab4cf7a71ebce55fd77fdfffd33765ed6e39c807f582.scope: Deactivated successfully.
Jan 23 10:10:13 compute-0 podman[245640]: 2026-01-23 10:10:13.814047323 +0000 UTC m=+0.505313600 container died c13710c0e08c4be570b5ab4cf7a71ebce55fd77fdfffd33765ed6e39c807f582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfe209b27a1325f7b6b51a0e5352e0296fb78879efe7afefcf2a990d7cebebcb-merged.mount: Deactivated successfully.
Jan 23 10:10:13 compute-0 podman[245640]: 2026-01-23 10:10:13.852417915 +0000 UTC m=+0.543684182 container remove c13710c0e08c4be570b5ab4cf7a71ebce55fd77fdfffd33765ed6e39c807f582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_thompson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 23 10:10:13 compute-0 systemd[1]: libpod-conmon-c13710c0e08c4be570b5ab4cf7a71ebce55fd77fdfffd33765ed6e39c807f582.scope: Deactivated successfully.
Jan 23 10:10:13 compute-0 sudo[245383]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:13 compute-0 sudo[245729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:10:13 compute-0 sudo[245729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:13 compute-0 sudo[245729]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:13 compute-0 sudo[245758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:10:14 compute-0 sudo[245758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:14 compute-0 podman[245918]: 2026-01-23 10:10:14.37531099 +0000 UTC m=+0.039160966 container create a3394e7b44e85119e2dcf66c4f208667729a0b6201623709beaf865049c97ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mcnulty, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 10:10:14 compute-0 sudo[245956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpihhdtusdluukrsmhorwcgorgmtamft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163013.984021-3149-117780787446102/AnsiballZ_container_config_data.py'
Jan 23 10:10:14 compute-0 sudo[245956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:14 compute-0 systemd[1]: Started libpod-conmon-a3394e7b44e85119e2dcf66c4f208667729a0b6201623709beaf865049c97ba2.scope.
Jan 23 10:10:14 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:10:14 compute-0 podman[245918]: 2026-01-23 10:10:14.442923901 +0000 UTC m=+0.106773877 container init a3394e7b44e85119e2dcf66c4f208667729a0b6201623709beaf865049c97ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mcnulty, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:10:14 compute-0 podman[245918]: 2026-01-23 10:10:14.450563921 +0000 UTC m=+0.114413887 container start a3394e7b44e85119e2dcf66c4f208667729a0b6201623709beaf865049c97ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mcnulty, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 10:10:14 compute-0 podman[245918]: 2026-01-23 10:10:14.358858687 +0000 UTC m=+0.022708683 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:10:14 compute-0 great_mcnulty[245962]: 167 167
Jan 23 10:10:14 compute-0 systemd[1]: libpod-a3394e7b44e85119e2dcf66c4f208667729a0b6201623709beaf865049c97ba2.scope: Deactivated successfully.
Jan 23 10:10:14 compute-0 podman[245918]: 2026-01-23 10:10:14.455506423 +0000 UTC m=+0.119356409 container attach a3394e7b44e85119e2dcf66c4f208667729a0b6201623709beaf865049c97ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mcnulty, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:10:14 compute-0 podman[245918]: 2026-01-23 10:10:14.456480861 +0000 UTC m=+0.120330857 container died a3394e7b44e85119e2dcf66c4f208667729a0b6201623709beaf865049c97ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mcnulty, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-132c3f27c75427b365480febceea2913fa3fa12722601e68e4bc31963d8a0dea-merged.mount: Deactivated successfully.
Jan 23 10:10:14 compute-0 podman[245918]: 2026-01-23 10:10:14.501426131 +0000 UTC m=+0.165276107 container remove a3394e7b44e85119e2dcf66c4f208667729a0b6201623709beaf865049c97ba2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 23 10:10:14 compute-0 systemd[1]: libpod-conmon-a3394e7b44e85119e2dcf66c4f208667729a0b6201623709beaf865049c97ba2.scope: Deactivated successfully.
Jan 23 10:10:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:14 compute-0 python3.9[245959]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 23 10:10:14 compute-0 sudo[245956]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:14 compute-0 podman[245986]: 2026-01-23 10:10:14.671936987 +0000 UTC m=+0.040413051 container create 0a1a27ce19d33debb031d9d72b19bfd34ef01702201565cca77fe4cdcda9cc95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 10:10:14 compute-0 systemd[1]: Started libpod-conmon-0a1a27ce19d33debb031d9d72b19bfd34ef01702201565cca77fe4cdcda9cc95.scope.
Jan 23 10:10:14 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9843c25f450cc8ee44974263350a46b8886206ec45c3b58d46c2d7a870dc03d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9843c25f450cc8ee44974263350a46b8886206ec45c3b58d46c2d7a870dc03d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9843c25f450cc8ee44974263350a46b8886206ec45c3b58d46c2d7a870dc03d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9843c25f450cc8ee44974263350a46b8886206ec45c3b58d46c2d7a870dc03d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:14 compute-0 podman[245986]: 2026-01-23 10:10:14.654235539 +0000 UTC m=+0.022711633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:10:14 compute-0 podman[245986]: 2026-01-23 10:10:14.758712929 +0000 UTC m=+0.127189023 container init 0a1a27ce19d33debb031d9d72b19bfd34ef01702201565cca77fe4cdcda9cc95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendeleev, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:10:14 compute-0 podman[245986]: 2026-01-23 10:10:14.766028729 +0000 UTC m=+0.134504793 container start 0a1a27ce19d33debb031d9d72b19bfd34ef01702201565cca77fe4cdcda9cc95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendeleev, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:10:14 compute-0 podman[245986]: 2026-01-23 10:10:14.769769446 +0000 UTC m=+0.138245520 container attach 0a1a27ce19d33debb031d9d72b19bfd34ef01702201565cca77fe4cdcda9cc95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 10:10:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b600025b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]: {
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:     "1": [
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:         {
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             "devices": [
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "/dev/loop3"
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             ],
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             "lv_name": "ceph_lv0",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             "lv_size": "21470642176",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             "name": "ceph_lv0",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             "tags": {
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.cluster_name": "ceph",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.crush_device_class": "",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.encrypted": "0",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.osd_id": "1",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.type": "block",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.vdo": "0",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:                 "ceph.with_tpm": "0"
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             },
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             "type": "block",
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:             "vg_name": "ceph_vg0"
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:         }
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]:     ]
Jan 23 10:10:15 compute-0 peaceful_mendeleev[246026]: }
Jan 23 10:10:15 compute-0 systemd[1]: libpod-0a1a27ce19d33debb031d9d72b19bfd34ef01702201565cca77fe4cdcda9cc95.scope: Deactivated successfully.
Jan 23 10:10:15 compute-0 podman[246075]: 2026-01-23 10:10:15.113908668 +0000 UTC m=+0.029934210 container died 0a1a27ce19d33debb031d9d72b19bfd34ef01702201565cca77fe4cdcda9cc95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 10:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9843c25f450cc8ee44974263350a46b8886206ec45c3b58d46c2d7a870dc03d4-merged.mount: Deactivated successfully.
Jan 23 10:10:15 compute-0 podman[246075]: 2026-01-23 10:10:15.155991306 +0000 UTC m=+0.072016818 container remove 0a1a27ce19d33debb031d9d72b19bfd34ef01702201565cca77fe4cdcda9cc95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:10:15 compute-0 systemd[1]: libpod-conmon-0a1a27ce19d33debb031d9d72b19bfd34ef01702201565cca77fe4cdcda9cc95.scope: Deactivated successfully.
Jan 23 10:10:15 compute-0 sudo[245758]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:10:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:15.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:10:15 compute-0 sudo[246100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:10:15 compute-0 sudo[246100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:15 compute-0 sudo[246100]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:15 compute-0 sudo[246129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:10:15 compute-0 sudo[246129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:15 compute-0 ceph-mon[74335]: pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:15 compute-0 sudo[246223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofngmjilppszrraujxoihaeicwtyfvpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163015.0359993-3182-67798269936640/AnsiballZ_container_config_hash.py'
Jan 23 10:10:15 compute-0 sudo[246223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:15 compute-0 python3.9[246225]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 10:10:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:15.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:15 compute-0 sudo[246223]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:15 compute-0 podman[246266]: 2026-01-23 10:10:15.707631086 +0000 UTC m=+0.038815725 container create b12001aa47079be98a40dd95ae177b1c4fd52a3a3106f148951167e48a7e8fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euler, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Jan 23 10:10:15 compute-0 systemd[1]: Started libpod-conmon-b12001aa47079be98a40dd95ae177b1c4fd52a3a3106f148951167e48a7e8fc6.scope.
Jan 23 10:10:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:10:15 compute-0 podman[246266]: 2026-01-23 10:10:15.782784074 +0000 UTC m=+0.113968713 container init b12001aa47079be98a40dd95ae177b1c4fd52a3a3106f148951167e48a7e8fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euler, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:10:15 compute-0 podman[246266]: 2026-01-23 10:10:15.689189797 +0000 UTC m=+0.020374466 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:10:15 compute-0 podman[246266]: 2026-01-23 10:10:15.789055824 +0000 UTC m=+0.120240463 container start b12001aa47079be98a40dd95ae177b1c4fd52a3a3106f148951167e48a7e8fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 23 10:10:15 compute-0 optimistic_euler[246303]: 167 167
Jan 23 10:10:15 compute-0 systemd[1]: libpod-b12001aa47079be98a40dd95ae177b1c4fd52a3a3106f148951167e48a7e8fc6.scope: Deactivated successfully.
Jan 23 10:10:15 compute-0 podman[246266]: 2026-01-23 10:10:15.792497613 +0000 UTC m=+0.123682282 container attach b12001aa47079be98a40dd95ae177b1c4fd52a3a3106f148951167e48a7e8fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 10:10:15 compute-0 podman[246266]: 2026-01-23 10:10:15.79515002 +0000 UTC m=+0.126334669 container died b12001aa47079be98a40dd95ae177b1c4fd52a3a3106f148951167e48a7e8fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 23 10:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f32faba4ee5bdaf0c008b801c899ec91d71bc0345a439fe0c3b9f0de90b0328-merged.mount: Deactivated successfully.
Jan 23 10:10:15 compute-0 podman[246266]: 2026-01-23 10:10:15.836458166 +0000 UTC m=+0.167642805 container remove b12001aa47079be98a40dd95ae177b1c4fd52a3a3106f148951167e48a7e8fc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 23 10:10:15 compute-0 systemd[1]: libpod-conmon-b12001aa47079be98a40dd95ae177b1c4fd52a3a3106f148951167e48a7e8fc6.scope: Deactivated successfully.
Jan 23 10:10:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:15 compute-0 podman[246330]: 2026-01-23 10:10:15.997950343 +0000 UTC m=+0.044456658 container create 0be40294f0fc56f67e55694dea15258f6c74f68a04be49701808bc4f2cf5c23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 10:10:16 compute-0 systemd[1]: Started libpod-conmon-0be40294f0fc56f67e55694dea15258f6c74f68a04be49701808bc4f2cf5c23e.scope.
Jan 23 10:10:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:10:16 compute-0 podman[246330]: 2026-01-23 10:10:15.977728132 +0000 UTC m=+0.024234467 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2885287363d956f2c3742e78689c9bba9924ec7eac313b41ff9691722bb85d16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2885287363d956f2c3742e78689c9bba9924ec7eac313b41ff9691722bb85d16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2885287363d956f2c3742e78689c9bba9924ec7eac313b41ff9691722bb85d16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2885287363d956f2c3742e78689c9bba9924ec7eac313b41ff9691722bb85d16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:16 compute-0 podman[246330]: 2026-01-23 10:10:16.091858799 +0000 UTC m=+0.138365134 container init 0be40294f0fc56f67e55694dea15258f6c74f68a04be49701808bc4f2cf5c23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 10:10:16 compute-0 podman[246330]: 2026-01-23 10:10:16.102033061 +0000 UTC m=+0.148539376 container start 0be40294f0fc56f67e55694dea15258f6c74f68a04be49701808bc4f2cf5c23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chaplygin, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 10:10:16 compute-0 podman[246330]: 2026-01-23 10:10:16.105680326 +0000 UTC m=+0.152186671 container attach 0be40294f0fc56f67e55694dea15258f6c74f68a04be49701808bc4f2cf5c23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chaplygin, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 10:10:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:10:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:16 compute-0 sudo[246539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlqphxnryhyqnmtrjnivmvtrcgxlcwuk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769163016.0408714-3212-176079569729462/AnsiballZ_edpm_container_manage.py'
Jan 23 10:10:16 compute-0 sudo[246539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:16 compute-0 lvm[246550]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:10:16 compute-0 ceph-mon[74335]: pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:10:16 compute-0 lvm[246550]: VG ceph_vg0 finished
Jan 23 10:10:16 compute-0 relaxed_chaplygin[246369]: {}
Jan 23 10:10:16 compute-0 systemd[1]: libpod-0be40294f0fc56f67e55694dea15258f6c74f68a04be49701808bc4f2cf5c23e.scope: Deactivated successfully.
Jan 23 10:10:16 compute-0 systemd[1]: libpod-0be40294f0fc56f67e55694dea15258f6c74f68a04be49701808bc4f2cf5c23e.scope: Consumed 1.137s CPU time.
Jan 23 10:10:16 compute-0 podman[246330]: 2026-01-23 10:10:16.837509529 +0000 UTC m=+0.884015844 container died 0be40294f0fc56f67e55694dea15258f6c74f68a04be49701808bc4f2cf5c23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chaplygin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 10:10:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2885287363d956f2c3742e78689c9bba9924ec7eac313b41ff9691722bb85d16-merged.mount: Deactivated successfully.
Jan 23 10:10:16 compute-0 podman[246330]: 2026-01-23 10:10:16.882651705 +0000 UTC m=+0.929158020 container remove 0be40294f0fc56f67e55694dea15258f6c74f68a04be49701808bc4f2cf5c23e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 10:10:16 compute-0 systemd[1]: libpod-conmon-0be40294f0fc56f67e55694dea15258f6c74f68a04be49701808bc4f2cf5c23e.scope: Deactivated successfully.
Jan 23 10:10:16 compute-0 sudo[246129]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:10:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:16 compute-0 python3[246543]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 10:10:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:10:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:17.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:10:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:17 compute-0 sudo[246593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:10:17 compute-0 sudo[246593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:17 compute-0 sudo[246593]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:17.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:17.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:18 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:18 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:10:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:19 compute-0 ceph-mon[74335]: pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:19.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:19.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:10:19
Jan 23 10:10:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:10:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:10:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'backups', 'volumes', '.nfs', 'cephfs.cephfs.data', 'images', 'default.rgw.log', '.rgw.root']
Jan 23 10:10:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:10:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:19] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:10:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:19] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:10:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:10:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:10:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:10:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:21.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:21 compute-0 ceph-mon[74335]: pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:21 compute-0 sudo[246660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:10:21 compute-0 sudo[246660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:21 compute-0 sudo[246660]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:21.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:10:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:22 compute-0 podman[246648]: 2026-01-23 10:10:22.706523313 +0000 UTC m=+2.222451197 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 10:10:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:23.570Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:10:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:23.571Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:10:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:23.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b580008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:25.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:10:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b580008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:27.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:10:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:27.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:10:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:27.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:10:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:27.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:29 compute-0 ceph-mon[74335]: pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:10:29 compute-0 podman[246580]: 2026-01-23 10:10:29.465781249 +0000 UTC m=+12.458157056 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Jan 23 10:10:29 compute-0 podman[246754]: 2026-01-23 10:10:29.617372212 +0000 UTC m=+0.052797927 container create 8bb2b340aee9aaa97f7ca83d13eb4f7fb24f714a49c3eeb6a3ace4a28aac7d35 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 23 10:10:29 compute-0 podman[246754]: 2026-01-23 10:10:29.587811943 +0000 UTC m=+0.023237678 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Jan 23 10:10:29 compute-0 python3[246543]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 23 10:10:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:29.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:29 compute-0 sudo[246539]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:29] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:10:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:29] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:10:30 compute-0 sudo[246943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewntqwykojmasluduxqkfpdrscrmdiuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163029.939224-3236-1989570108094/AnsiballZ_stat.py'
Jan 23 10:10:30 compute-0 sudo[246943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:30 compute-0 ceph-mon[74335]: pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:30 compute-0 ceph-mon[74335]: pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:10:30 compute-0 ceph-mon[74335]: pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:30 compute-0 python3.9[246945]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:10:30 compute-0 sudo[246943]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:31 compute-0 sudo[247098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxxdqbagrhqgpdhdfbpvhgvlspimckyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163031.1660485-3272-161723356817759/AnsiballZ_container_config_data.py'
Jan 23 10:10:31 compute-0 sudo[247098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:31 compute-0 ceph-mon[74335]: pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:31 compute-0 python3.9[247100]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 23 10:10:31 compute-0 sudo[247098]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:31.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:32 compute-0 sudo[247251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oztjhivadkyrxsbpcfsmfaildvovmipo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163032.0283885-3305-47039159300758/AnsiballZ_container_config_hash.py'
Jan 23 10:10:32 compute-0 sudo[247251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:10:32 compute-0 python3.9[247253]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 10:10:32 compute-0 sudo[247251]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b580008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:33 compute-0 sudo[247404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pglyosmyfkuqaldvigoydzaepzjaodma ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769163032.869431-3335-154015901470639/AnsiballZ_edpm_container_manage.py'
Jan 23 10:10:33 compute-0 sudo[247404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:33.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:33 compute-0 python3[247406]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 10:10:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:33.572Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:10:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:33.573Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:10:33 compute-0 podman[247443]: 2026-01-23 10:10:33.600162314 +0000 UTC m=+0.022410804 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Jan 23 10:10:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:33.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:33 compute-0 podman[247443]: 2026-01-23 10:10:33.808109415 +0000 UTC m=+0.230357875 container create 955f9566d05798cdd71546732c1b5a8107d1262ca33e1e62353f97ae0dc2cc2f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute)
Jan 23 10:10:33 compute-0 python3[247406]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b kolla_start
Jan 23 10:10:33 compute-0 sudo[247404]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b580008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:10:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:10:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:35.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:35.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:10:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b580008d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:37.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:10:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:10:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:37.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:10:37 compute-0 podman[247511]: 2026-01-23 10:10:37.539419357 +0000 UTC m=+0.065354418 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 23 10:10:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:37.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400a9e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:39.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:39.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:39] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:10:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:39] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:10:40 compute-0 ceph-mon[74335]: pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:10:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:40 compute-0 sudo[247661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzdpxvkqfsoyhwnsfcwoytryhunpmsoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163040.5721343-3359-186218331008989/AnsiballZ_stat.py'
Jan 23 10:10:40 compute-0 sudo[247661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:41 compute-0 python3.9[247663]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:10:41 compute-0 sudo[247661]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:41.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:41 compute-0 sudo[247763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:10:41 compute-0 sudo[247763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:10:41 compute-0 sudo[247763]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:41 compute-0 sudo[247841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhhkknsvrnaaijbgkpufdtjjxuwbhfdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163041.3552413-3386-218271330822265/AnsiballZ_file.py'
Jan 23 10:10:41 compute-0 sudo[247841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:41.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:41 compute-0 python3.9[247843]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:10:41 compute-0 sudo[247841]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:42 compute-0 ceph-mon[74335]: pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:42 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:10:42 compute-0 ceph-mon[74335]: pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:10:42 compute-0 ceph-mon[74335]: pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:42 compute-0 ceph-mon[74335]: pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:42 compute-0 sudo[247994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwigdupzzcoizgisruhpqnpjkrdebjgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163041.987058-3386-4711426867299/AnsiballZ_copy.py'
Jan 23 10:10:42 compute-0 sudo[247994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:42 compute-0 python3.9[247996]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769163041.987058-3386-4711426867299/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 10:10:42 compute-0 sudo[247994]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:42 compute-0 sudo[248070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjxxrhbzbqenyaejdcsqoyrvrfofaess ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163041.987058-3386-4711426867299/AnsiballZ_systemd.py'
Jan 23 10:10:42 compute-0 sudo[248070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:43 compute-0 python3.9[248072]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 10:10:43 compute-0 systemd[1]: Reloading.
Jan 23 10:10:43 compute-0 systemd-rc-local-generator[248098]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:10:43 compute-0 systemd-sysv-generator[248101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:10:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:43.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:43 compute-0 sudo[248070]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:43.574Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:10:43 compute-0 sudo[248181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrlofvwokjfyvtrqxlnypdwlozamtltk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163041.987058-3386-4711426867299/AnsiballZ_systemd.py'
Jan 23 10:10:43 compute-0 sudo[248181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:43.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:43 compute-0 python3.9[248183]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 10:10:44 compute-0 systemd[1]: Reloading.
Jan 23 10:10:44 compute-0 systemd-rc-local-generator[248213]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 10:10:44 compute-0 systemd-sysv-generator[248216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 10:10:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:44 compute-0 systemd[1]: Starting nova_compute container...
Jan 23 10:10:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7533e253a2f2eaf0e94372f654b1b0e8b480b8ea04759637cb571ea93f127010/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7533e253a2f2eaf0e94372f654b1b0e8b480b8ea04759637cb571ea93f127010/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7533e253a2f2eaf0e94372f654b1b0e8b480b8ea04759637cb571ea93f127010/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7533e253a2f2eaf0e94372f654b1b0e8b480b8ea04759637cb571ea93f127010/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7533e253a2f2eaf0e94372f654b1b0e8b480b8ea04759637cb571ea93f127010/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:44 compute-0 podman[248224]: 2026-01-23 10:10:44.468584921 +0000 UTC m=+0.093855736 container init 955f9566d05798cdd71546732c1b5a8107d1262ca33e1e62353f97ae0dc2cc2f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 23 10:10:44 compute-0 podman[248224]: 2026-01-23 10:10:44.478768843 +0000 UTC m=+0.104039638 container start 955f9566d05798cdd71546732c1b5a8107d1262ca33e1e62353f97ae0dc2cc2f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 10:10:44 compute-0 podman[248224]: nova_compute
Jan 23 10:10:44 compute-0 nova_compute[248239]: + sudo -E kolla_set_configs
Jan 23 10:10:44 compute-0 systemd[1]: Started nova_compute container.
Jan 23 10:10:44 compute-0 sudo[248181]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Validating config file
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying service configuration files
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Deleting /etc/ceph
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Creating directory /etc/ceph
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /etc/ceph
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Writing out command to execute
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 10:10:44 compute-0 nova_compute[248239]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 10:10:44 compute-0 nova_compute[248239]: ++ cat /run_command
Jan 23 10:10:44 compute-0 nova_compute[248239]: + CMD=nova-compute
Jan 23 10:10:44 compute-0 nova_compute[248239]: + ARGS=
Jan 23 10:10:44 compute-0 nova_compute[248239]: + sudo kolla_copy_cacerts
Jan 23 10:10:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101044 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:10:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:44 compute-0 nova_compute[248239]: + [[ ! -n '' ]]
Jan 23 10:10:44 compute-0 nova_compute[248239]: + . kolla_extend_start
Jan 23 10:10:44 compute-0 nova_compute[248239]: Running command: 'nova-compute'
Jan 23 10:10:44 compute-0 nova_compute[248239]: + echo 'Running command: '\''nova-compute'\'''
Jan 23 10:10:44 compute-0 nova_compute[248239]: + umask 0022
Jan 23 10:10:44 compute-0 nova_compute[248239]: + exec nova-compute
Jan 23 10:10:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:45 compute-0 ceph-mon[74335]: pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:45.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:45.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:46 compute-0 ceph-mon[74335]: pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:46 compute-0 nova_compute[248239]: 2026-01-23 10:10:46.773 248243 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 10:10:46 compute-0 nova_compute[248239]: 2026-01-23 10:10:46.774 248243 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 10:10:46 compute-0 nova_compute[248239]: 2026-01-23 10:10:46.774 248243 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 10:10:46 compute-0 nova_compute[248239]: 2026-01-23 10:10:46.774 248243 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 23 10:10:46 compute-0 nova_compute[248239]: 2026-01-23 10:10:46.923 248243 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:10:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:46 compute-0 nova_compute[248239]: 2026-01-23 10:10:46.937 248243 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:10:46 compute-0 nova_compute[248239]: 2026-01-23 10:10:46.938 248243 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 23 10:10:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:47.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:10:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:10:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:47.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:10:47 compute-0 ceph-mon[74335]: pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:47 compute-0 nova_compute[248239]: 2026-01-23 10:10:47.664 248243 INFO nova.virt.driver [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 23 10:10:47 compute-0 python3.9[248407]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:10:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:47.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:47 compute-0 nova_compute[248239]: 2026-01-23 10:10:47.859 248243 INFO nova.compute.provider_config [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.046 248243 DEBUG oslo_concurrency.lockutils [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.046 248243 DEBUG oslo_concurrency.lockutils [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.046 248243 DEBUG oslo_concurrency.lockutils [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.047 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.047 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.047 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.047 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.048 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.048 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.048 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.048 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.049 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.049 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.049 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.049 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.049 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.049 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.050 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.050 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.050 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.050 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.050 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.051 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.051 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.051 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.051 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.052 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.052 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.052 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.052 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.052 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.053 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.053 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.053 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.053 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.053 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.054 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.054 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.054 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.054 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.054 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.054 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.055 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.055 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.055 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.055 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.055 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.056 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.056 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.056 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.056 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.056 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.056 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.056 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.057 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.057 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.057 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.057 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.058 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.058 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.058 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.058 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.058 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.058 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.058 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.059 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.059 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.059 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.059 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.059 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.059 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.060 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.060 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.060 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.060 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.060 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.060 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.061 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.061 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.061 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.061 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.061 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.061 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.062 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.062 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.062 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.062 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.062 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.062 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.062 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.063 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.063 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.063 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.063 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.064 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.064 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.064 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.064 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.064 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.064 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.065 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.065 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.065 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.065 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.065 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.065 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.065 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.066 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.066 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.066 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.066 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.066 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.066 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.066 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.067 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.067 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.067 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.067 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.067 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.068 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.068 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.068 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.068 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.068 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.068 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.069 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.069 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.069 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.069 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.069 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.069 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.069 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.069 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.070 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.070 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.070 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.070 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.070 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.070 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.070 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.071 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.071 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.071 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.071 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.071 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.071 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.071 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.072 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.072 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.072 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.072 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.072 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.072 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.073 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.073 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.073 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.073 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.073 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.074 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.075 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.075 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.075 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.075 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.075 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.076 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.076 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.076 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.076 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.076 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.076 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.077 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.077 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.077 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.077 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.077 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.077 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.078 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.078 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.078 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.078 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.078 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.078 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.078 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.079 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.079 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.079 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.079 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.079 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.079 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.080 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.080 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.080 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.080 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.080 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.080 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.080 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.080 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.081 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.081 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.081 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.081 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.081 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.081 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.082 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.082 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.082 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.082 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.082 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.082 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.082 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.082 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.083 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.083 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.083 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.083 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.083 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.083 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.084 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.084 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.084 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.084 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.084 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.084 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.084 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.085 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.085 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.085 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.085 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.085 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.085 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.085 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.086 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.086 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.086 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.086 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.086 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.087 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.087 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.087 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.087 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.087 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.088 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.088 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.088 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.088 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.088 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.089 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.089 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.089 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.089 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.089 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.090 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.090 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.090 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.090 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.090 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.090 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.091 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.091 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.091 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.091 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.091 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.091 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.091 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.092 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.092 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.092 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.092 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.092 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.092 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.093 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.093 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.093 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.093 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.093 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.093 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.093 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.094 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.094 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.094 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.094 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.094 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.094 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.094 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.095 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.095 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.095 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.095 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.095 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.095 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.095 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.096 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.096 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.096 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.096 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.096 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.096 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.096 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.097 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.097 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.097 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.097 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.097 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.097 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.097 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.098 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.098 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.098 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.098 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.098 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.098 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.098 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.099 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.099 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.099 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.099 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.099 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.099 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.100 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.100 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.100 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.100 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.100 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.100 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.100 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.101 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.101 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.101 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.101 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.101 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.102 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.102 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.102 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.102 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.102 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.102 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.102 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.103 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.103 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.103 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.103 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.103 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.103 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.104 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.104 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.104 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.104 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.104 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.104 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.104 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.105 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.105 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.105 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.105 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.105 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.105 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.105 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.106 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.106 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.106 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.106 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.107 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.107 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.107 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.107 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.107 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.107 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.108 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.108 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.108 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.108 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.108 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.109 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.109 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.109 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.109 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.109 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.109 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.109 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.110 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.110 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.110 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.110 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.110 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.110 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.110 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.111 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.111 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.111 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.111 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.111 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.111 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.112 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.112 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.112 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.112 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.112 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.112 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.112 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.113 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.113 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.113 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.113 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.113 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.113 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.114 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.114 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.114 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.114 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.114 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.114 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.114 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.115 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.115 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.115 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.115 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.115 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.116 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.116 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.116 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.116 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.116 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.116 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.117 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.117 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.117 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.117 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.117 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.117 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.117 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.118 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.118 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.118 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.118 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.118 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.118 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.119 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.119 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.119 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.119 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.120 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.121 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.121 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.121 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.121 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.121 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.121 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.122 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.122 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.122 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.122 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.122 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.123 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.123 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.123 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.123 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.123 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.123 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.124 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.124 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.124 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.124 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.124 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.124 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.124 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.125 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.125 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.125 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.125 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.125 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.125 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.126 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.126 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.126 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.126 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.126 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.126 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.127 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.127 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.127 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.127 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.127 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.127 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.128 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.128 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.128 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.128 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.128 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.128 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.128 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.129 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.129 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.129 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.129 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.129 248243 WARNING oslo_config.cfg [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 23 10:10:48 compute-0 nova_compute[248239]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 23 10:10:48 compute-0 nova_compute[248239]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 23 10:10:48 compute-0 nova_compute[248239]: and ``live_migration_inbound_addr`` respectively.
Jan 23 10:10:48 compute-0 nova_compute[248239]: ).  Its value may be silently ignored in the future.
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.130 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.130 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.130 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.130 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.130 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.130 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.131 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.131 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.131 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.131 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.131 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.131 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.131 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.132 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.132 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.132 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.132 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.132 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.132 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.rbd_secret_uuid        = f3005f84-239a-55b6-a948-8f1fb592b920 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.133 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.133 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.133 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.133 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.133 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.133 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.134 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.134 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.134 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.134 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.134 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.134 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.134 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.135 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.135 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.135 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.135 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.135 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.135 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.136 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.136 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.136 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.136 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.136 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.136 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.137 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.137 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.137 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.137 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.137 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.138 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.138 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.138 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.138 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.138 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.138 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.139 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.139 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.139 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.139 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.139 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.139 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.140 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.140 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.140 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.140 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.140 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.140 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.141 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.141 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.141 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.141 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.141 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.141 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.142 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.142 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.142 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.142 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.142 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.142 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.142 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.143 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.143 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.143 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.143 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.143 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.143 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.144 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.144 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.144 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.144 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.144 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.144 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.145 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.145 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.145 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.145 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.145 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.145 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.145 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.146 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.146 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.146 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.146 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.146 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.146 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.146 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.147 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.147 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.147 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.147 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.147 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.147 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.148 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.148 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.148 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.148 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.148 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.148 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.149 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.149 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.149 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.149 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.149 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.149 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.150 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.150 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.150 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.150 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.150 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.151 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.151 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.151 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.151 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.151 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.151 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.152 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.152 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.152 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.152 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.153 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.153 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.153 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.153 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.153 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.153 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.154 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.154 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.154 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.154 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.154 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.155 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.155 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.155 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.155 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.155 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.156 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.156 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.156 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.156 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.156 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.157 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.157 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.157 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.157 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.157 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.158 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.158 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.158 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.158 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.158 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.159 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.159 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.159 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.159 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.159 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.160 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.160 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.160 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.160 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.161 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.161 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.161 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.161 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.162 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.162 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.162 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.162 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.162 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.162 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.163 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.163 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.163 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.163 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.163 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.163 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.164 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.164 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.164 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.164 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.165 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.165 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.165 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.165 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.165 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.165 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.166 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.166 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.166 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.166 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.166 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.167 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.167 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.167 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.167 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.167 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.167 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.167 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.168 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.168 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.168 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.168 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.168 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.168 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.169 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.169 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.169 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.169 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.169 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.169 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.169 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.170 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.170 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.170 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.170 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.170 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.170 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.170 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.171 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.171 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.171 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.171 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.171 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.171 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.171 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.172 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.172 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.172 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.172 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.173 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.173 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.173 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.173 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.173 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.173 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.174 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.174 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.174 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.174 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.174 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.175 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.175 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.175 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.175 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.175 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.176 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.176 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.176 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.176 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.176 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.176 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.177 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.177 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.177 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.177 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.177 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.177 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.178 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.178 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.178 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.178 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.178 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.178 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.178 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.179 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.179 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.179 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.179 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.179 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.180 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.180 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.180 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.180 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.180 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.180 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.181 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.181 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.181 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.181 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.181 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.181 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.182 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.182 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.182 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.182 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.182 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.182 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.182 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.183 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.183 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.183 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.183 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.183 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.183 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.184 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.184 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.184 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.184 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.184 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.184 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.185 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.185 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.185 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.185 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.185 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.186 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.186 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.186 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.186 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.186 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.186 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.187 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.187 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.187 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.187 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.187 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.188 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.188 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.188 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.188 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.188 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.188 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.189 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.189 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.189 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.189 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.189 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.189 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.190 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.190 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.190 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.190 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.190 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.190 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.190 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.191 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.191 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.191 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.191 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.191 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.191 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.191 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.192 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.192 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.192 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.192 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.192 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.192 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.193 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.193 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.193 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.193 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.193 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.193 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.193 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.194 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.194 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.194 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.194 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.194 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.194 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.194 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.195 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.195 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.195 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.195 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.195 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.195 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.196 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.196 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.196 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.196 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.196 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.196 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.197 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.197 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.197 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.197 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.197 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.197 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.198 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.198 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.198 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.198 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.198 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.198 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.199 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.199 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.199 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.199 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.199 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.199 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.200 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.200 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.200 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.200 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.200 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.200 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.201 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.201 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.201 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.201 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.201 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.201 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.201 248243 DEBUG oslo_service.service [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.203 248243 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.277 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.278 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.279 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.279 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 23 10:10:48 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 23 10:10:48 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 23 10:10:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.356 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f3372bd1e20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.359 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f3372bd1e20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.361 248243 INFO nova.virt.libvirt.driver [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Connection event '1' reason 'None'
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.504 248243 WARNING nova.virt.libvirt.driver [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 23 10:10:48 compute-0 nova_compute[248239]: 2026-01-23 10:10:48.506 248243 DEBUG nova.virt.libvirt.volume.mount [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 23 10:10:48 compute-0 python3.9[248598]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:10:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:48 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:49 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400ab60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:49 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64002a80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:49.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:49 compute-0 python3.9[248769]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.446 248243 INFO nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Libvirt host capabilities <capabilities>
Jan 23 10:10:49 compute-0 nova_compute[248239]: 
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <host>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <uuid>f03a0360-43fd-4fa3-b498-9716505b3cad</uuid>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <cpu>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <arch>x86_64</arch>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model>EPYC-Rome-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <vendor>AMD</vendor>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <microcode version='16777317'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <signature family='23' model='49' stepping='0'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='x2apic'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='tsc-deadline'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='osxsave'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='hypervisor'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='tsc_adjust'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='spec-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='stibp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='arch-capabilities'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='cmp_legacy'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='topoext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='virt-ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='lbrv'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='tsc-scale'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='vmcb-clean'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='pause-filter'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='pfthreshold'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='svme-addr-chk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='rdctl-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='skip-l1dfl-vmentry'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='mds-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature name='pschange-mc-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <pages unit='KiB' size='4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <pages unit='KiB' size='2048'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <pages unit='KiB' size='1048576'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </cpu>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <power_management>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <suspend_mem/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </power_management>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <iommu support='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <migration_features>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <live/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <uri_transports>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <uri_transport>tcp</uri_transport>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <uri_transport>rdma</uri_transport>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </uri_transports>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </migration_features>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <topology>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <cells num='1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <cell id='0'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:           <memory unit='KiB'>7864316</memory>
Jan 23 10:10:49 compute-0 nova_compute[248239]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 23 10:10:49 compute-0 nova_compute[248239]:           <pages unit='KiB' size='2048'>0</pages>
Jan 23 10:10:49 compute-0 nova_compute[248239]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 23 10:10:49 compute-0 nova_compute[248239]:           <distances>
Jan 23 10:10:49 compute-0 nova_compute[248239]:             <sibling id='0' value='10'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:           </distances>
Jan 23 10:10:49 compute-0 nova_compute[248239]:           <cpus num='8'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:           </cpus>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         </cell>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </cells>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </topology>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <cache>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </cache>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <secmodel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model>selinux</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <doi>0</doi>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </secmodel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <secmodel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model>dac</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <doi>0</doi>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </secmodel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </host>
Jan 23 10:10:49 compute-0 nova_compute[248239]: 
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <guest>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <os_type>hvm</os_type>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <arch name='i686'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <wordsize>32</wordsize>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <domain type='qemu'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <domain type='kvm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </arch>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <features>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <pae/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <nonpae/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <acpi default='on' toggle='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <apic default='on' toggle='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <cpuselection/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <deviceboot/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <disksnapshot default='on' toggle='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <externalSnapshot/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </features>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </guest>
Jan 23 10:10:49 compute-0 nova_compute[248239]: 
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <guest>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <os_type>hvm</os_type>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <arch name='x86_64'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <wordsize>64</wordsize>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <domain type='qemu'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <domain type='kvm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </arch>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <features>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <acpi default='on' toggle='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <apic default='on' toggle='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <cpuselection/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <deviceboot/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <disksnapshot default='on' toggle='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <externalSnapshot/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </features>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </guest>
Jan 23 10:10:49 compute-0 nova_compute[248239]: 
Jan 23 10:10:49 compute-0 nova_compute[248239]: </capabilities>
Jan 23 10:10:49 compute-0 nova_compute[248239]: 
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.452 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.473 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 23 10:10:49 compute-0 nova_compute[248239]: <domainCapabilities>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <domain>kvm</domain>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <arch>i686</arch>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <vcpu max='4096'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <iothreads supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <os supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <enum name='firmware'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <loader supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>rom</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pflash</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='readonly'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>yes</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>no</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='secure'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>no</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </loader>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </os>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <cpu>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='host-passthrough' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='hostPassthroughMigratable'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>on</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>off</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='maximum' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='maximumMigratable'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>on</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>off</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='host-model' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <vendor>AMD</vendor>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='x2apic'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='hypervisor'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='stibp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='overflow-recov'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='succor'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='lbrv'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc-scale'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='flushbyasid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='pause-filter'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='pfthreshold'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='disable' name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='custom' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='ClearwaterForest'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ddpd-u'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sha512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm3'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='ClearwaterForest-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ddpd-u'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sha512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm3'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Dhyana-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Turin'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbpb'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Turin-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbpb'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-128'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-256'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-128'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-256'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v6'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v7'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='KnightsMill'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512er'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512pf'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='KnightsMill-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512er'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512pf'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G4-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tbm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G5-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tbm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='athlon'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='athlon-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='core2duo'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='core2duo-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='coreduo'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='coreduo-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='n270'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='n270-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='phenom'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='phenom-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </cpu>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <memoryBacking supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <enum name='sourceType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>file</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>anonymous</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>memfd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </memoryBacking>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <devices>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <disk supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='diskDevice'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>disk</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>cdrom</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>floppy</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>lun</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='bus'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>fdc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>scsi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>sata</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-non-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </disk>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <graphics supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vnc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>egl-headless</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dbus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </graphics>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <video supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='modelType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vga</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>cirrus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>none</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>bochs</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>ramfb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </video>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <hostdev supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='mode'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>subsystem</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='startupPolicy'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>default</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>mandatory</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>requisite</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>optional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='subsysType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pci</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>scsi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='capsType'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='pciBackend'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </hostdev>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <rng supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-non-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>random</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>egd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>builtin</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </rng>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <filesystem supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='driverType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>path</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>handle</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtiofs</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </filesystem>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <tpm supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tpm-tis</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tpm-crb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>emulator</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>external</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendVersion'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>2.0</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </tpm>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <redirdev supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='bus'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </redirdev>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <channel supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pty</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>unix</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </channel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <crypto supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>qemu</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>builtin</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </crypto>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <interface supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>default</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>passt</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </interface>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <panic supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>isa</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>hyperv</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </panic>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <console supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>null</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pty</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dev</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>file</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pipe</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>stdio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>udp</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tcp</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>unix</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>qemu-vdagent</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dbus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </console>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </devices>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <features>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <gic supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <vmcoreinfo supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <genid supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <backingStoreInput supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <backup supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <async-teardown supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <s390-pv supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <ps2 supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <tdx supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <sev supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <sgx supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <hyperv supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='features'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>relaxed</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vapic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>spinlocks</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vpindex</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>runtime</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>synic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>stimer</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>reset</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vendor_id</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>frequencies</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>reenlightenment</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tlbflush</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>ipi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>avic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>emsr_bitmap</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>xmm_input</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <defaults>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <spinlocks>4095</spinlocks>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <stimer_direct>on</stimer_direct>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </defaults>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </hyperv>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <launchSecurity supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </features>
Jan 23 10:10:49 compute-0 nova_compute[248239]: </domainCapabilities>
Jan 23 10:10:49 compute-0 nova_compute[248239]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.479 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 23 10:10:49 compute-0 nova_compute[248239]: <domainCapabilities>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <domain>kvm</domain>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <arch>i686</arch>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <vcpu max='240'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <iothreads supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <os supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <enum name='firmware'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <loader supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>rom</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pflash</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='readonly'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>yes</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>no</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='secure'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>no</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </loader>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </os>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <cpu>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='host-passthrough' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='hostPassthroughMigratable'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>on</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>off</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='maximum' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='maximumMigratable'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>on</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>off</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='host-model' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <vendor>AMD</vendor>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='x2apic'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='hypervisor'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='stibp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='overflow-recov'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='succor'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='lbrv'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc-scale'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='flushbyasid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='pause-filter'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='pfthreshold'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='disable' name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='custom' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='ClearwaterForest'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ddpd-u'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sha512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm3'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='ClearwaterForest-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ddpd-u'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sha512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm3'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Dhyana-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Turin'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbpb'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Turin-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbpb'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-128'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-256'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-128'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-256'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v6'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v7'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='KnightsMill'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512er'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512pf'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='KnightsMill-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512er'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512pf'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G4-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tbm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G5-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tbm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='athlon'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='athlon-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='core2duo'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='core2duo-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='coreduo'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='coreduo-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='n270'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='n270-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='phenom'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='phenom-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </cpu>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <memoryBacking supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <enum name='sourceType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>file</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>anonymous</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>memfd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </memoryBacking>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <devices>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <disk supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='diskDevice'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>disk</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>cdrom</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>floppy</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>lun</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='bus'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>ide</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>fdc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>scsi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>sata</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-non-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </disk>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <graphics supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vnc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>egl-headless</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dbus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </graphics>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <video supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='modelType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vga</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>cirrus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>none</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>bochs</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>ramfb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </video>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <hostdev supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='mode'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>subsystem</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='startupPolicy'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>default</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>mandatory</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>requisite</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>optional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='subsysType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pci</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>scsi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='capsType'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='pciBackend'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </hostdev>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <rng supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-non-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>random</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>egd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>builtin</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </rng>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <filesystem supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='driverType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>path</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>handle</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtiofs</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </filesystem>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <tpm supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tpm-tis</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tpm-crb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>emulator</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>external</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendVersion'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>2.0</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </tpm>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <redirdev supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='bus'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </redirdev>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <channel supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pty</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>unix</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </channel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <crypto supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>qemu</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>builtin</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </crypto>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <interface supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>default</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>passt</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </interface>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <panic supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>isa</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>hyperv</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </panic>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <console supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>null</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pty</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dev</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>file</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pipe</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>stdio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>udp</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tcp</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>unix</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>qemu-vdagent</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dbus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </console>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </devices>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <features>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <gic supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <vmcoreinfo supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <genid supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <backingStoreInput supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <backup supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <async-teardown supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <s390-pv supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <ps2 supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <tdx supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <sev supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <sgx supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <hyperv supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='features'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>relaxed</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vapic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>spinlocks</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vpindex</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>runtime</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>synic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>stimer</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>reset</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vendor_id</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>frequencies</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>reenlightenment</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tlbflush</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>ipi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>avic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>emsr_bitmap</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>xmm_input</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <defaults>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <spinlocks>4095</spinlocks>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <stimer_direct>on</stimer_direct>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </defaults>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </hyperv>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <launchSecurity supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </features>
Jan 23 10:10:49 compute-0 nova_compute[248239]: </domainCapabilities>
Jan 23 10:10:49 compute-0 nova_compute[248239]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.531 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.536 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 23 10:10:49 compute-0 nova_compute[248239]: <domainCapabilities>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <domain>kvm</domain>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <arch>x86_64</arch>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <vcpu max='240'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <iothreads supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <os supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <enum name='firmware'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <loader supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>rom</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pflash</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='readonly'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>yes</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>no</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='secure'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>no</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </loader>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </os>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <cpu>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='host-passthrough' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='hostPassthroughMigratable'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>on</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>off</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='maximum' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='maximumMigratable'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>on</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>off</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='host-model' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <vendor>AMD</vendor>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='x2apic'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='hypervisor'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='stibp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='overflow-recov'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='succor'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='lbrv'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc-scale'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='flushbyasid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='pause-filter'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='pfthreshold'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='disable' name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='custom' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='ClearwaterForest'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ddpd-u'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sha512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm3'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='ClearwaterForest-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ddpd-u'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sha512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm3'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Dhyana-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Turin'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbpb'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Turin-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbpb'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-128'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-256'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-128'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-256'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v6'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v7'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='KnightsMill'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512er'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512pf'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='KnightsMill-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512er'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512pf'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G4-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tbm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G5-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tbm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='athlon'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='athlon-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='core2duo'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='core2duo-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='coreduo'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='coreduo-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='n270'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='n270-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='phenom'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='phenom-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </cpu>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <memoryBacking supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <enum name='sourceType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>file</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>anonymous</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>memfd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </memoryBacking>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <devices>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <disk supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='diskDevice'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>disk</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>cdrom</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>floppy</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>lun</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='bus'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>ide</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>fdc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>scsi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>sata</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-non-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </disk>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <graphics supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vnc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>egl-headless</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dbus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </graphics>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <video supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='modelType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vga</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>cirrus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>none</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>bochs</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>ramfb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </video>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <hostdev supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='mode'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>subsystem</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='startupPolicy'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>default</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>mandatory</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>requisite</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>optional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='subsysType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pci</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>scsi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='capsType'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='pciBackend'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </hostdev>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <rng supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-non-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>random</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>egd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>builtin</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </rng>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <filesystem supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='driverType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>path</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>handle</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtiofs</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </filesystem>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <tpm supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tpm-tis</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tpm-crb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>emulator</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>external</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendVersion'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>2.0</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </tpm>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <redirdev supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='bus'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </redirdev>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <channel supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pty</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>unix</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </channel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <crypto supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>qemu</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>builtin</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </crypto>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <interface supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>default</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>passt</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </interface>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <panic supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>isa</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>hyperv</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </panic>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <console supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>null</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pty</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dev</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>file</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pipe</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>stdio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>udp</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tcp</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>unix</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>qemu-vdagent</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dbus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </console>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </devices>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <features>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <gic supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <vmcoreinfo supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <genid supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <backingStoreInput supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <backup supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <async-teardown supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <s390-pv supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <ps2 supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <tdx supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <sev supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <sgx supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <hyperv supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='features'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>relaxed</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vapic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>spinlocks</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vpindex</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>runtime</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>synic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>stimer</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>reset</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vendor_id</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>frequencies</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>reenlightenment</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tlbflush</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>ipi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>avic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>emsr_bitmap</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>xmm_input</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <defaults>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <spinlocks>4095</spinlocks>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <stimer_direct>on</stimer_direct>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </defaults>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </hyperv>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <launchSecurity supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </features>
Jan 23 10:10:49 compute-0 nova_compute[248239]: </domainCapabilities>
Jan 23 10:10:49 compute-0 nova_compute[248239]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.632 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 23 10:10:49 compute-0 nova_compute[248239]: <domainCapabilities>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <domain>kvm</domain>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <arch>x86_64</arch>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <vcpu max='4096'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <iothreads supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <os supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <enum name='firmware'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>efi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <loader supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>rom</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pflash</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='readonly'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>yes</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>no</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='secure'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>yes</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>no</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </loader>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </os>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <cpu>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='host-passthrough' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='hostPassthroughMigratable'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>on</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>off</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='maximum' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='maximumMigratable'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>on</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>off</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='host-model' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <vendor>AMD</vendor>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='x2apic'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='hypervisor'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='stibp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='overflow-recov'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='succor'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='lbrv'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='tsc-scale'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='flushbyasid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='pause-filter'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='pfthreshold'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <feature policy='disable' name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <mode name='custom' supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Broadwell-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='ClearwaterForest'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ddpd-u'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sha512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm3'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='ClearwaterForest-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ddpd-u'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sha512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm3'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sm4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Cooperlake-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Denverton-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Dhyana-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Milan-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Rome-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Turin'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbpb'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-Turin-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amd-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='auto-ibrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='perfmon-v2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbpb'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='stibp-always-on'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='EPYC-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-128'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-256'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='GraniteRapids-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-128'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-256'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx10-512'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='prefetchiti'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Haswell-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v6'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Icelake-Server-v7'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='IvyBridge-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='KnightsMill'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512er'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512pf'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='KnightsMill-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512er'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512pf'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G4-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tbm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Opteron_G5-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fma4'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tbm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xop'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SapphireRapids-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='amx-tile'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-bf16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-fp16'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bitalg'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrc'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fzrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='la57'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='taa-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:49.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='SierraForest-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ifma'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cmpccxadd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fbsdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='fsrs'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ibrs-all'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='intel-psfd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='lam'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mcdt-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pbrsb-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='psdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='serialize'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vaes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Client-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='hle'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='rtm'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Skylake-Server-v5'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512bw'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512cd'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512dq'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512f'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='avx512vl'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='invpcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pcid'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='pku'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 ceph-mon[74335]: pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='mpx'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v2'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v3'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='core-capability'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='split-lock-detect'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='Snowridge-v4'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='cldemote'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='erms'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='gfni'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdir64b'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='movdiri'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='xsaves'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='athlon'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='athlon-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='core2duo'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='core2duo-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='coreduo'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='coreduo-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='n270'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='n270-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='ss'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='phenom'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <blockers model='phenom-v1'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnow'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <feature name='3dnowext'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </blockers>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </mode>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </cpu>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <memoryBacking supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <enum name='sourceType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>file</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>anonymous</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <value>memfd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </memoryBacking>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <devices>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <disk supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='diskDevice'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>disk</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>cdrom</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>floppy</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>lun</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='bus'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>fdc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>scsi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>sata</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-non-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </disk>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <graphics supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vnc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>egl-headless</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dbus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </graphics>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <video supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='modelType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vga</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>cirrus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>none</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>bochs</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>ramfb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </video>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <hostdev supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='mode'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>subsystem</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='startupPolicy'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>default</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>mandatory</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>requisite</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>optional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='subsysType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pci</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>scsi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='capsType'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='pciBackend'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </hostdev>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <rng supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtio-non-transitional</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>random</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>egd</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>builtin</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </rng>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <filesystem supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='driverType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>path</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>handle</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>virtiofs</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </filesystem>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <tpm supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tpm-tis</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tpm-crb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>emulator</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>external</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendVersion'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>2.0</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </tpm>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <redirdev supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='bus'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>usb</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </redirdev>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <channel supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pty</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>unix</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </channel>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <crypto supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>qemu</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendModel'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>builtin</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </crypto>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <interface supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='backendType'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>default</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>passt</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </interface>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <panic supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='model'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>isa</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>hyperv</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </panic>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <console supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='type'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>null</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vc</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pty</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dev</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>file</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>pipe</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>stdio</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>udp</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tcp</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>unix</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>qemu-vdagent</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>dbus</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </console>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </devices>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   <features>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <gic supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <vmcoreinfo supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <genid supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <backingStoreInput supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <backup supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <async-teardown supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <s390-pv supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <ps2 supported='yes'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <tdx supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <sev supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <sgx supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <hyperv supported='yes'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <enum name='features'>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>relaxed</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vapic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>spinlocks</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vpindex</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>runtime</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>synic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>stimer</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>reset</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>vendor_id</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>frequencies</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>reenlightenment</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>tlbflush</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>ipi</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>avic</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>emsr_bitmap</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <value>xmm_input</value>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </enum>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       <defaults>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <spinlocks>4095</spinlocks>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <stimer_direct>on</stimer_direct>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 10:10:49 compute-0 nova_compute[248239]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 10:10:49 compute-0 nova_compute[248239]:       </defaults>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     </hyperv>
Jan 23 10:10:49 compute-0 nova_compute[248239]:     <launchSecurity supported='no'/>
Jan 23 10:10:49 compute-0 nova_compute[248239]:   </features>
Jan 23 10:10:49 compute-0 nova_compute[248239]: </domainCapabilities>
Jan 23 10:10:49 compute-0 nova_compute[248239]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
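The XML dump above is the libvirt domainCapabilities document that nova's _get_domain_capabilities helper fetches and caches: each <model usable='...'> entry names a CPU model libvirt knows for this host, and the matching <blockers> element lists the guest features the host cannot provide for models marked usable='no'. A minimal sketch of reading the same data directly with libvirt-python, assuming a local qemu:///system connection and plain xml.etree parsing (illustrative only, not nova's code):

    # Sketch: list the CPU models libvirt reports for this host and the
    # features that block the unusable ones. Assumes libvirt-python and a
    # reachable qemu:///system URI; mirrors the XML shown in the log above.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")
    caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    conn.close()

    root = ET.fromstring(caps_xml)
    mode = root.find(".//cpu/mode[@name='custom']")
    for model in mode.findall("model"):
        blockers = mode.find(f"blockers[@model='{model.text}']")
        missing = [f.get("name") for f in blockers] if blockers is not None else []
        print(f"{model.text}: usable={model.get('usable')}"
              + (f", blocked by {missing}" if missing else ""))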
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.720 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.721 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.721 248243 DEBUG nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.729 248243 INFO nova.virt.libvirt.host [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Secure Boot support detected
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.731 248243 INFO nova.virt.libvirt.driver [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.731 248243 INFO nova.virt.libvirt.driver [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.740 248243 DEBUG nova.virt.libvirt.driver [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.765 248243 INFO nova.virt.node [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Determined node identity a1f82a16-d7e7-4500-99d7-a20de995d9a2 from /var/lib/nova/compute_id
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.785 248243 WARNING nova.compute.manager [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Compute nodes ['a1f82a16-d7e7-4500-99d7-a20de995d9a2'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.814 248243 INFO nova.compute.manager [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.907 248243 WARNING nova.compute.manager [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.907 248243 DEBUG oslo_concurrency.lockutils [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.908 248243 DEBUG oslo_concurrency.lockutils [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.908 248243 DEBUG oslo_concurrency.lockutils [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.908 248243 DEBUG nova.compute.resource_tracker [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:10:49 compute-0 nova_compute[248239]: 2026-01-23 10:10:49.909 248243 DEBUG oslo_concurrency.processutils [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:10:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:49] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:10:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:49] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:10:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:10:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:10:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:10:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:10:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:10:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:10:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:10:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:10:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:10:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738253789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:10:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:50 compute-0 nova_compute[248239]: 2026-01-23 10:10:50.366 248243 DEBUG oslo_concurrency.processutils [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
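[annotation] The two processutils lines above bracket the resource tracker's Ceph capacity probe (started 10:10:49.909, returned 0 after 0.457s). A minimal sketch of running that same command and reading the cluster totals is below; the JSON field names ("stats", "total_bytes", "total_avail_bytes") are the usual `ceph df --format=json` layout and are an assumption here, not something shown in this log.

    import json
    import subprocess

    # Run the exact command the log records and report cluster-wide capacity.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    free_gib = stats["total_avail_bytes"] / 1024 ** 3
    total_gib = stats["total_bytes"] / 1024 ** 3
    print(f"{free_gib:.1f} GiB free of {total_gib:.1f} GiB")  # ~60/60 GiB here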
Jan 23 10:10:50 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 23 10:10:50 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 23 10:10:50 compute-0 sudo[248968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjcaxamyzefbsuiafaqszpdyweatewtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163049.953901-3566-168473701114916/AnsiballZ_podman_container.py'
Jan 23 10:10:50 compute-0 sudo[248968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:50 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58003a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:50 compute-0 nova_compute[248239]: 2026-01-23 10:10:50.670 248243 WARNING nova.virt.libvirt.driver [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:10:50 compute-0 nova_compute[248239]: 2026-01-23 10:10:50.672 248243 DEBUG nova.compute.resource_tracker [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4917MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:10:50 compute-0 nova_compute[248239]: 2026-01-23 10:10:50.672 248243 DEBUG oslo_concurrency.lockutils [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:10:50 compute-0 nova_compute[248239]: 2026-01-23 10:10:50.672 248243 DEBUG oslo_concurrency.lockutils [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:10:50 compute-0 nova_compute[248239]: 2026-01-23 10:10:50.687 248243 WARNING nova.compute.resource_tracker [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] No compute node record for compute-0.ctlplane.example.com:a1f82a16-d7e7-4500-99d7-a20de995d9a2: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host a1f82a16-d7e7-4500-99d7-a20de995d9a2 could not be found.
Jan 23 10:10:50 compute-0 nova_compute[248239]: 2026-01-23 10:10:50.709 248243 INFO nova.compute.resource_tracker [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: a1f82a16-d7e7-4500-99d7-a20de995d9a2
Jan 23 10:10:50 compute-0 python3.9[248970]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
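[annotation] The podman_container invocation above uses state=absent with force_delete=True for the nova_nvme_cleaner container. As a hedged, approximate CLI equivalent (the --ignore flag for idempotent removal is an assumption, not taken from this log):

    import subprocess

    # Roughly what "state=absent, force_delete=True" reduces to on the host:
    # force-remove the container, tolerating the case where it never existed.
    subprocess.run(
        ["podman", "rm", "--force", "--ignore", "nova_nvme_cleaner"],
        check=True)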
Jan 23 10:10:50 compute-0 nova_compute[248239]: 2026-01-23 10:10:50.782 248243 DEBUG nova.compute.resource_tracker [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:10:50 compute-0 nova_compute[248239]: 2026-01-23 10:10:50.783 248243 DEBUG nova.compute.resource_tracker [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:10:50 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:10:50 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:10:50 compute-0 sudo[248968]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:51 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:51 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400ab60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:51.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:51 compute-0 sudo[249144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kslyhkmjyxtmrmresutcqbvixbrhvlnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163051.1820438-3590-184661510510242/AnsiballZ_systemd.py'
Jan 23 10:10:51 compute-0 sudo[249144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:51 compute-0 nova_compute[248239]: 2026-01-23 10:10:51.704 248243 INFO nova.scheduler.client.report [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] [req-44d34c39-7fb8-4818-8894-c8772c465135] Created resource provider record via placement API for resource provider with UUID a1f82a16-d7e7-4500-99d7-a20de995d9a2 and name compute-0.ctlplane.example.com.
Jan 23 10:10:51 compute-0 nova_compute[248239]: 2026-01-23 10:10:51.722 248243 DEBUG oslo_concurrency.processutils [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:10:51 compute-0 python3.9[249146]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
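[annotation] The ansible.builtin.systemd task above requests state=restarted for edpm_nova_compute.service; the Stopping/Stopped/Starting/Started lines that follow are systemd carrying that out. A hedged host-side equivalent of the task, for illustration only:

    import subprocess

    # With state=restarted and no daemon_reload/daemon_reexec, the task
    # amounts to a plain unit restart.
    subprocess.run(["systemctl", "restart", "edpm_nova_compute.service"],
                   check=True)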
Jan 23 10:10:51 compute-0 systemd[1]: Stopping nova_compute container...
Jan 23 10:10:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:51.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:51 compute-0 nova_compute[248239]: 2026-01-23 10:10:51.838 248243 DEBUG oslo_concurrency.lockutils [None req-e2dacc56-f00b-4e33-8ceb-5b2200b88116 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:10:51 compute-0 nova_compute[248239]: 2026-01-23 10:10:51.839 248243 DEBUG oslo_concurrency.lockutils [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:10:51 compute-0 nova_compute[248239]: 2026-01-23 10:10:51.839 248243 DEBUG oslo_concurrency.lockutils [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:10:51 compute-0 nova_compute[248239]: 2026-01-23 10:10:51.839 248243 DEBUG oslo_concurrency.lockutils [None req-fbd507b1-3c0a-4fbd-968a-13c0ae5f6004 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:10:52 compute-0 virtqemud[248554]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 23 10:10:52 compute-0 systemd[1]: libpod-955f9566d05798cdd71546732c1b5a8107d1262ca33e1e62353f97ae0dc2cc2f.scope: Deactivated successfully.
Jan 23 10:10:52 compute-0 virtqemud[248554]: hostname: compute-0
Jan 23 10:10:52 compute-0 virtqemud[248554]: End of file while reading data: Input/output error
Jan 23 10:10:52 compute-0 podman[249151]: 2026-01-23 10:10:52.310228176 +0000 UTC m=+0.518461258 container died 955f9566d05798cdd71546732c1b5a8107d1262ca33e1e62353f97ae0dc2cc2f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 10:10:52 compute-0 systemd[1]: libpod-955f9566d05798cdd71546732c1b5a8107d1262ca33e1e62353f97ae0dc2cc2f.scope: Consumed 4.086s CPU time.
Jan 23 10:10:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-955f9566d05798cdd71546732c1b5a8107d1262ca33e1e62353f97ae0dc2cc2f-userdata-shm.mount: Deactivated successfully.
Jan 23 10:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7533e253a2f2eaf0e94372f654b1b0e8b480b8ea04759637cb571ea93f127010-merged.mount: Deactivated successfully.
Jan 23 10:10:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:52 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:53 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:53 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64002a80 fd 50 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:53.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:53.575Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:10:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:53.575Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:10:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:53.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:10:53 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/738253789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:10:53 compute-0 ceph-mon[74335]: pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:53 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1010858224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:10:53 compute-0 podman[249151]: 2026-01-23 10:10:53.918838186 +0000 UTC m=+2.127071268 container cleanup 955f9566d05798cdd71546732c1b5a8107d1262ca33e1e62353f97ae0dc2cc2f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 23 10:10:53 compute-0 podman[249151]: nova_compute
Jan 23 10:10:53 compute-0 podman[249201]: nova_compute
Jan 23 10:10:53 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 23 10:10:53 compute-0 systemd[1]: Stopped nova_compute container.
Jan 23 10:10:54 compute-0 systemd[1]: Starting nova_compute container...
Jan 23 10:10:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7533e253a2f2eaf0e94372f654b1b0e8b480b8ea04759637cb571ea93f127010/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7533e253a2f2eaf0e94372f654b1b0e8b480b8ea04759637cb571ea93f127010/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7533e253a2f2eaf0e94372f654b1b0e8b480b8ea04759637cb571ea93f127010/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7533e253a2f2eaf0e94372f654b1b0e8b480b8ea04759637cb571ea93f127010/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7533e253a2f2eaf0e94372f654b1b0e8b480b8ea04759637cb571ea93f127010/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:54 compute-0 podman[249214]: 2026-01-23 10:10:54.118649743 +0000 UTC m=+0.099592821 container init 955f9566d05798cdd71546732c1b5a8107d1262ca33e1e62353f97ae0dc2cc2f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:10:54 compute-0 podman[249214]: 2026-01-23 10:10:54.124958434 +0000 UTC m=+0.105901482 container start 955f9566d05798cdd71546732c1b5a8107d1262ca33e1e62353f97ae0dc2cc2f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 10:10:54 compute-0 podman[249214]: nova_compute
Jan 23 10:10:54 compute-0 nova_compute[249229]: + sudo -E kolla_set_configs
Jan 23 10:10:54 compute-0 systemd[1]: Started nova_compute container.
Jan 23 10:10:54 compute-0 sudo[249144]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Validating config file
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying service configuration files
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Deleting /etc/ceph
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Creating directory /etc/ceph
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /etc/ceph
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Writing out command to execute
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 10:10:54 compute-0 nova_compute[249229]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
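[annotation] The INFO:__main__ "Deleting/Copying/Setting permission" lines above come from kolla_set_configs running with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS: every start it re-copies the files listed in config.json into place and re-applies permissions. A minimal sketch of that copy loop is below; the config.json field names ("config_files", "source", "dest", "perm") follow the common kolla layout and are an assumption here, and directory/glob sources and owner handling are omitted.

    import json
    import os
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    for entry in cfg.get("config_files", []):          # single-file entries only
        src, dest = entry["source"], entry["dest"]
        if os.path.isfile(dest):
            os.remove(dest)                            # "Deleting ..." lines
        shutil.copy(src, dest)                         # "Copying ... to ..." lines
        os.chmod(dest, int(entry.get("perm", "0600"), 8))  # "Setting permission ..."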
Jan 23 10:10:54 compute-0 nova_compute[249229]: ++ cat /run_command
Jan 23 10:10:54 compute-0 nova_compute[249229]: + CMD=nova-compute
Jan 23 10:10:54 compute-0 nova_compute[249229]: + ARGS=
Jan 23 10:10:54 compute-0 nova_compute[249229]: + sudo kolla_copy_cacerts
Jan 23 10:10:54 compute-0 nova_compute[249229]: + [[ ! -n '' ]]
Jan 23 10:10:54 compute-0 nova_compute[249229]: + . kolla_extend_start
Jan 23 10:10:54 compute-0 nova_compute[249229]: Running command: 'nova-compute'
Jan 23 10:10:54 compute-0 nova_compute[249229]: + echo 'Running command: '\''nova-compute'\'''
Jan 23 10:10:54 compute-0 nova_compute[249229]: + umask 0022
Jan 23 10:10:54 compute-0 nova_compute[249229]: + exec nova-compute
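[annotation] The "+ ..." shell trace above is the tail of kolla_start: it reads the service command from /run_command, relaxes the umask, and exec()s it so nova-compute replaces the wrapper as the container's main process. A hedged Python rendering of just those steps (the kolla_copy_cacerts and kolla_extend_start steps in the trace are omitted):

    import os

    with open("/run_command") as f:
        cmd = f.read().strip()      # "nova-compute" in this log
    os.umask(0o022)                 # matches "+ umask 0022"
    os.execvp(cmd, [cmd])           # matches "+ exec nova-compute"; never returns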
Jan 23 10:10:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:54 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400ab60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:54 compute-0 ceph-mon[74335]: pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:10:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1198102383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:10:54 compute-0 ceph-mon[74335]: pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:10:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:55 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:55 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fe0 fd 50 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:55 compute-0 sudo[249392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chadmclhsybpshnetcfznnhaaetskngb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769163054.7483034-3617-63994515210694/AnsiballZ_podman_container.py'
Jan 23 10:10:55 compute-0 sudo[249392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:10:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:55.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:55 compute-0 python3.9[249395]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 23 10:10:55 compute-0 systemd[1]: Started libpod-conmon-8bb2b340aee9aaa97f7ca83d13eb4f7fb24f714a49c3eeb6a3ace4a28aac7d35.scope.
Jan 23 10:10:55 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93eff5275212790fcbbf833655001e19ceab9e817d39526194e915ddfd9abb3/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93eff5275212790fcbbf833655001e19ceab9e817d39526194e915ddfd9abb3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93eff5275212790fcbbf833655001e19ceab9e817d39526194e915ddfd9abb3/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 23 10:10:55 compute-0 podman[249419]: 2026-01-23 10:10:55.568540616 +0000 UTC m=+0.132362082 container init 8bb2b340aee9aaa97f7ca83d13eb4f7fb24f714a49c3eeb6a3ace4a28aac7d35 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 10:10:55 compute-0 podman[249419]: 2026-01-23 10:10:55.576540225 +0000 UTC m=+0.140361661 container start 8bb2b340aee9aaa97f7ca83d13eb4f7fb24f714a49c3eeb6a3ace4a28aac7d35 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 10:10:55 compute-0 python3.9[249395]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Applying nova statedir ownership
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 23 10:10:55 compute-0 nova_compute_init[249441]: INFO:nova_statedir:Nova statedir ownership complete
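[annotation] The nova_compute_init lines above log an ownership pass over /var/lib/nova: paths still owned 1000:1000 are chowned to the kolla nova uid/gid 42436:42436, with /var/lib/nova/compute_id skipped via NOVA_STATEDIR_OWNERSHIP_SKIP. A minimal sketch of that walk is below; it is not the real nova_statedir_ownership.py, and the SELinux relabelling the log also shows is omitted.

    import os

    TARGET_UID = TARGET_GID = 42436
    SKIP = os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "")

    for root, _dirs, files in os.walk("/var/lib/nova"):
        # Each directory appears once as `root`, so this covers dirs and files.
        for path in [root] + [os.path.join(root, f) for f in files]:
            if path == SKIP:
                continue                        # leave compute_id untouched
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                os.lchown(path, TARGET_UID, TARGET_GID)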
Jan 23 10:10:55 compute-0 systemd[1]: libpod-8bb2b340aee9aaa97f7ca83d13eb4f7fb24f714a49c3eeb6a3ace4a28aac7d35.scope: Deactivated successfully.
Jan 23 10:10:55 compute-0 podman[249455]: 2026-01-23 10:10:55.679316406 +0000 UTC m=+0.029290802 container died 8bb2b340aee9aaa97f7ca83d13eb4f7fb24f714a49c3eeb6a3ace4a28aac7d35 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 10:10:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bb2b340aee9aaa97f7ca83d13eb4f7fb24f714a49c3eeb6a3ace4a28aac7d35-userdata-shm.mount: Deactivated successfully.
Jan 23 10:10:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a93eff5275212790fcbbf833655001e19ceab9e817d39526194e915ddfd9abb3-merged.mount: Deactivated successfully.
Jan 23 10:10:55 compute-0 podman[249455]: 2026-01-23 10:10:55.732016919 +0000 UTC m=+0.081991315 container cleanup 8bb2b340aee9aaa97f7ca83d13eb4f7fb24f714a49c3eeb6a3ace4a28aac7d35 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, config_id=edpm)
Jan 23 10:10:55 compute-0 sudo[249392]: pam_unix(sudo:session): session closed for user root
Jan 23 10:10:55 compute-0 systemd[1]: libpod-conmon-8bb2b340aee9aaa97f7ca83d13eb4f7fb24f714a49c3eeb6a3ace4a28aac7d35.scope: Deactivated successfully.
Jan 23 10:10:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:10:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:55.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:10:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:10:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:10:56 compute-0 nova_compute[249229]: 2026-01-23 10:10:56.528 249233 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 10:10:56 compute-0 nova_compute[249229]: 2026-01-23 10:10:56.528 249233 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 10:10:56 compute-0 nova_compute[249229]: 2026-01-23 10:10:56.529 249233 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 23 10:10:56 compute-0 nova_compute[249229]: 2026-01-23 10:10:56.529 249233 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 23 10:10:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:56 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:56 compute-0 sshd-session[223669]: Connection closed by 192.168.122.30 port 35524
Jan 23 10:10:56 compute-0 sshd-session[223642]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:10:56 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Jan 23 10:10:56 compute-0 systemd[1]: session-54.scope: Consumed 1min 58.208s CPU time.
Jan 23 10:10:56 compute-0 systemd-logind[784]: Session 54 logged out. Waiting for processes to exit.
Jan 23 10:10:56 compute-0 systemd-logind[784]: Removed session 54.
Jan 23 10:10:56 compute-0 nova_compute[249229]: 2026-01-23 10:10:56.686 249233 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:10:56 compute-0 nova_compute[249229]: 2026-01-23 10:10:56.701 249233 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:10:56 compute-0 nova_compute[249229]: 2026-01-23 10:10:56.702 249233 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
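[annotation] The three lines above record a capability probe, not a failure: grep exits 1 when the pattern is absent, so "returned: 1 ... Not Retrying" just means the node.session.scan string was not found in /sbin/iscsiadm (here the run-on-host wrapper installed earlier), and the caller can treat manual iSCSI scan support as unavailable. A hedged sketch of the same check:

    import subprocess

    result = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        capture_output=True, text=True)
    # grep: 0 = pattern found, 1 = not found, >1 = real error.
    manual_scan_supported = (result.returncode == 0)
    print("manual iSCSI scans supported:", manual_scan_supported)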
Jan 23 10:10:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:57 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:57 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60003420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:10:57.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.133 249233 INFO nova.virt.driver [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 23 10:10:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:57 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.252 249233 INFO nova.compute.provider_config [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.261 249233 DEBUG oslo_concurrency.lockutils [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.262 249233 DEBUG oslo_concurrency.lockutils [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.262 249233 DEBUG oslo_concurrency.lockutils [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.262 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.262 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.263 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.263 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.263 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.263 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.263 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.264 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.264 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.264 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.264 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.264 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.264 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.265 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.265 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.265 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.265 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.265 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.266 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.266 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.266 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.266 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.266 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.266 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.267 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.267 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.267 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.267 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.267 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.268 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.268 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.268 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.268 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.268 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.269 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.269 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.269 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.269 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.269 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.269 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.270 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.270 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.270 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.271 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.271 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.271 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.271 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.271 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.271 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.272 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.272 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.272 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.272 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.273 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.273 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.273 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.273 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.273 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.273 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.274 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.274 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.274 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.274 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.274 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.275 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.275 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.275 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.275 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.275 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.276 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.276 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.276 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.276 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.276 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.276 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.277 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.277 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.277 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.277 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.277 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.277 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.277 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.278 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.278 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.278 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.278 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.278 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.279 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.279 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.279 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.279 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.279 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.279 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.280 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.280 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.280 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.280 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.280 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.280 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.280 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.281 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.281 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.281 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.281 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.281 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.282 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.282 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.282 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.282 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.282 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.282 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.282 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.283 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.283 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.283 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.283 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.283 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.283 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.283 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.284 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.284 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.284 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.284 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.284 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.284 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.284 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.285 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.285 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.285 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.285 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:57.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.285 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.286 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.286 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.286 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.286 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.286 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.286 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.286 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.287 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.287 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.287 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.287 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.287 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.287 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.288 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.288 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.288 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.288 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.288 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.289 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.289 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.289 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.289 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.289 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.290 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.290 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.290 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.290 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.291 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.291 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.291 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.291 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.291 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.292 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.292 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.292 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.292 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.292 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.292 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.293 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.293 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.293 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.293 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.293 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.294 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.294 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.294 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.294 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.295 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.295 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.295 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.295 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.295 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.295 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.296 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.296 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.296 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.296 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.296 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.296 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.297 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.297 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.297 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.297 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.297 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.297 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.297 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.298 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.298 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.298 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.298 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.298 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.298 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.298 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.299 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.299 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.299 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.299 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.299 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.299 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.299 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.300 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.300 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.300 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.300 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.300 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.301 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.301 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.301 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.301 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.301 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.301 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.301 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.302 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.302 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.302 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.302 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.302 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.302 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.302 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.303 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.303 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.303 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.303 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.303 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.303 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.303 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.304 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.304 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.304 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.304 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.304 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.304 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.304 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.305 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.305 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.305 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.305 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.305 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.305 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.305 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.306 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.306 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.306 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.306 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.306 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.306 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.306 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.307 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.307 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.307 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.307 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.307 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.307 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.308 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.308 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.308 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.308 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.308 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.308 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.308 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.309 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.309 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.309 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.309 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.309 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.309 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.310 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.310 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.310 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.310 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.310 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.311 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.311 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.311 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.311 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.311 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.311 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.312 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.312 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.312 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.312 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.312 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.313 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.313 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.313 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.313 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.313 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.314 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.314 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.314 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.314 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.314 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.315 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.315 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.315 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.315 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.315 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.315 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.316 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.316 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.316 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.316 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.316 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.317 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.317 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.317 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.317 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.317 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.317 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.317 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.317 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.318 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.318 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.318 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.318 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.318 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.318 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.319 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.319 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.319 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.319 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.319 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.319 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.320 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.320 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.320 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.320 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.320 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.320 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.321 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.321 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.321 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.321 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.321 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.321 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.321 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.322 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.322 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.322 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.322 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.322 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.322 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.323 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.323 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.323 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.323 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.324 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.324 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.324 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.324 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.324 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.325 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.325 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.325 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.325 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.325 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.325 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.325 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.326 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.326 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.326 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.326 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.326 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.326 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.326 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.326 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.327 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.327 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.327 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.327 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.327 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.327 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.327 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.328 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.328 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.328 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.328 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.328 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.328 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.329 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.329 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.329 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.329 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.329 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.329 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.329 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.330 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.330 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.330 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.330 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.330 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.330 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.331 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.331 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.331 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.331 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.331 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.331 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.332 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.332 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.332 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.332 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.332 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.332 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.333 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.333 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.333 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.333 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.333 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.333 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.333 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.334 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.334 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.334 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.334 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.334 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.335 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.335 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.335 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.335 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.335 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.335 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.336 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.336 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.336 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.336 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.336 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.336 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.337 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.337 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.337 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.337 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.337 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.337 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.337 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.338 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.338 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.338 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.338 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.338 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.338 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.338 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.339 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.339 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.339 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.339 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.339 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.339 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.340 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.340 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.340 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.340 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.340 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.340 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.340 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.341 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.341 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.341 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.341 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.341 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.342 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.342 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.342 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.342 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.342 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.342 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.343 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.343 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.343 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.343 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.343 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.343 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.343 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.344 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.344 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.344 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.344 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.344 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.344 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.345 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.345 249233 WARNING oslo_config.cfg [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 23 10:10:57 compute-0 nova_compute[249229]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 23 10:10:57 compute-0 nova_compute[249229]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 23 10:10:57 compute-0 nova_compute[249229]: and ``live_migration_inbound_addr`` respectively.
Jan 23 10:10:57 compute-0 nova_compute[249229]: ).  Its value may be silently ignored in the future.
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.345 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.345 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.345 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.346 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.346 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.346 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.346 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.346 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.346 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.346 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.347 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.347 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.347 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.347 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.347 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.348 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.348 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.348 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.348 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.rbd_secret_uuid        = f3005f84-239a-55b6-a948-8f1fb592b920 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.348 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.348 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.349 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.349 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.349 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.349 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.349 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.349 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.349 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.350 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.350 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.350 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.350 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.350 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.350 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.351 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.351 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.351 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.351 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.351 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.351 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.351 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.352 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.352 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.352 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.352 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.352 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.353 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.353 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.353 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.353 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.353 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.353 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.353 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.354 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.354 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.354 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.354 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.354 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.354 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.354 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.355 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.355 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.355 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.355 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.355 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.355 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.355 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.356 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.356 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.356 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.356 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.356 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.356 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.356 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.357 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.357 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.357 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.357 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.357 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.357 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.357 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.358 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.358 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.358 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.358 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.358 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.358 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.358 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.359 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.359 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.359 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.359 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.359 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.359 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.359 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.360 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.360 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.360 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.360 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.360 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.360 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.360 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.361 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.361 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.361 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.361 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.361 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.361 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.361 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.362 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.362 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.362 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.362 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.362 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.362 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.362 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.363 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.363 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.363 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.363 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.363 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.363 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.364 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.364 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.364 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.364 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.364 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.364 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.365 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.365 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.365 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.365 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.365 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.365 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.366 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.366 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.366 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.366 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.366 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.367 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.367 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.367 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.367 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.367 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.367 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.367 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.368 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.368 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.368 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.368 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.368 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.369 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.369 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.369 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.369 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.369 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.369 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.369 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.370 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.370 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.370 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.370 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.370 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.370 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.371 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.371 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:57 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:10:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:57 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.371 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.371 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.371 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.371 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.372 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.372 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.372 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.372 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.372 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.372 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.373 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.373 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.373 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.373 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.373 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.374 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.374 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.374 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.374 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.374 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.374 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.375 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.375 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.375 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.375 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.375 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.375 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.375 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.376 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.376 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.376 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.376 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.376 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.377 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.377 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.377 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.377 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.377 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.377 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.378 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.378 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.378 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.378 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.378 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.379 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.379 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.379 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.379 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.379 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.379 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.380 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.380 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.380 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.380 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.380 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.380 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.380 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.381 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.381 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.381 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.381 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.381 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.381 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.381 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.382 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.382 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.382 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.382 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.382 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.382 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.383 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.383 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.383 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.383 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.383 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.384 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.384 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.384 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.384 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.384 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.385 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.385 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.385 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.385 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.385 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.386 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.386 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.386 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.386 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.386 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.387 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.387 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.387 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.387 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.387 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.388 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.388 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.388 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.388 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.388 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.388 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.389 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.389 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.389 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.389 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.389 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.389 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.389 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.390 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.390 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.390 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.390 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.390 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.390 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.391 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.391 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.391 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.391 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.391 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.391 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.392 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.392 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.392 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.392 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.392 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.393 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.393 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.393 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.393 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.393 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.393 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.394 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.394 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.394 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.394 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.394 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.394 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.395 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.395 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.395 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.395 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.395 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.396 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.396 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.396 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.396 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.396 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.396 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.397 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.397 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.397 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.397 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.397 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.397 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.397 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.398 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.398 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.398 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.398 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.398 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.398 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.399 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.399 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.399 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.399 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.399 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.399 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.399 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.400 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.400 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.400 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.400 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.400 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.401 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.401 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.401 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.401 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.401 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.401 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.402 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.402 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.402 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.402 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.402 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.402 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.402 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.403 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.403 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.403 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.403 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.403 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.403 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.403 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.404 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.404 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.404 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.404 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.404 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.404 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.405 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.405 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.405 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.405 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.405 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.406 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.406 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.406 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.406 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.406 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.406 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.406 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.407 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.407 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.407 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.407 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.407 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.407 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.408 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.408 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.408 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.408 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.408 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.409 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.409 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.409 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.409 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.409 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.410 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.410 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.410 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.410 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.410 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.410 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.411 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.411 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.411 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.411 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.411 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.411 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.412 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.412 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.412 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.412 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.412 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.412 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.413 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.413 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.413 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.413 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.414 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.414 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.414 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.414 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.414 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.415 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.415 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.415 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.415 249233 DEBUG oslo_service.service [None req-2510b38c-b544-4f84-9a9a-ed263ec54c24 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.416 249233 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.430 249233 INFO nova.virt.node [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Determined node identity a1f82a16-d7e7-4500-99d7-a20de995d9a2 from /var/lib/nova/compute_id
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.431 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.431 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.431 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.432 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.443 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb0e67a7610> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.446 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb0e67a7610> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.447 249233 INFO nova.virt.libvirt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Connection event '1' reason 'None'
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.455 249233 INFO nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Libvirt host capabilities <capabilities>
Jan 23 10:10:57 compute-0 nova_compute[249229]: 
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <host>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <uuid>f03a0360-43fd-4fa3-b498-9716505b3cad</uuid>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <cpu>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <arch>x86_64</arch>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model>EPYC-Rome-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <vendor>AMD</vendor>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <microcode version='16777317'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <signature family='23' model='49' stepping='0'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='x2apic'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='tsc-deadline'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='osxsave'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='hypervisor'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='tsc_adjust'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='spec-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='stibp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='arch-capabilities'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='cmp_legacy'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='topoext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='virt-ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='lbrv'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='tsc-scale'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='vmcb-clean'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='pause-filter'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='pfthreshold'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='svme-addr-chk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='rdctl-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='skip-l1dfl-vmentry'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='mds-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature name='pschange-mc-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <pages unit='KiB' size='4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <pages unit='KiB' size='2048'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <pages unit='KiB' size='1048576'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </cpu>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <power_management>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <suspend_mem/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </power_management>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <iommu support='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <migration_features>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <live/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <uri_transports>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <uri_transport>tcp</uri_transport>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <uri_transport>rdma</uri_transport>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </uri_transports>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </migration_features>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <topology>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <cells num='1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <cell id='0'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:           <memory unit='KiB'>7864316</memory>
Jan 23 10:10:57 compute-0 nova_compute[249229]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 23 10:10:57 compute-0 nova_compute[249229]:           <pages unit='KiB' size='2048'>0</pages>
Jan 23 10:10:57 compute-0 nova_compute[249229]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 23 10:10:57 compute-0 nova_compute[249229]:           <distances>
Jan 23 10:10:57 compute-0 nova_compute[249229]:             <sibling id='0' value='10'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:           </distances>
Jan 23 10:10:57 compute-0 nova_compute[249229]:           <cpus num='8'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:           </cpus>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         </cell>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </cells>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </topology>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <cache>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </cache>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <secmodel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model>selinux</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <doi>0</doi>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </secmodel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <secmodel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model>dac</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <doi>0</doi>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </secmodel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </host>
Jan 23 10:10:57 compute-0 nova_compute[249229]: 
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <guest>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <os_type>hvm</os_type>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <arch name='i686'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <wordsize>32</wordsize>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <domain type='qemu'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <domain type='kvm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </arch>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <features>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <pae/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <nonpae/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <acpi default='on' toggle='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <apic default='on' toggle='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <cpuselection/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <deviceboot/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <disksnapshot default='on' toggle='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <externalSnapshot/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </features>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </guest>
Jan 23 10:10:57 compute-0 nova_compute[249229]: 
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <guest>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <os_type>hvm</os_type>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <arch name='x86_64'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <wordsize>64</wordsize>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <domain type='qemu'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <domain type='kvm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </arch>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <features>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <acpi default='on' toggle='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <apic default='on' toggle='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <cpuselection/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <deviceboot/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <disksnapshot default='on' toggle='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <externalSnapshot/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </features>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </guest>
Jan 23 10:10:57 compute-0 nova_compute[249229]: 
Jan 23 10:10:57 compute-0 nova_compute[249229]: </capabilities>
Jan 23 10:10:57 compute-0 nova_compute[249229]: 
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.459 249233 DEBUG nova.virt.libvirt.volume.mount [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.461 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.467 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 23 10:10:57 compute-0 nova_compute[249229]: <domainCapabilities>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <domain>kvm</domain>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <arch>i686</arch>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <vcpu max='4096'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <iothreads supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <os supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <enum name='firmware'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <loader supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>rom</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pflash</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='readonly'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>yes</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>no</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='secure'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>no</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </loader>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </os>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <cpu>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='host-passthrough' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='hostPassthroughMigratable'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>on</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>off</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='maximum' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='maximumMigratable'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>on</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>off</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='host-model' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <vendor>AMD</vendor>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='x2apic'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='hypervisor'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='stibp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='overflow-recov'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='succor'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='lbrv'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc-scale'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='flushbyasid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='pause-filter'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='pfthreshold'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='disable' name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='custom' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='ClearwaterForest'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ddpd-u'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sha512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm3'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='ClearwaterForest-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ddpd-u'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sha512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm3'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Dhyana-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Turin'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbpb'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Turin-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbpb'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-128'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-256'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-128'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-256'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v6'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v7'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='KnightsMill'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512er'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512pf'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='KnightsMill-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512er'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512pf'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G4-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tbm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G5-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tbm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='athlon'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='athlon-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='core2duo'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='core2duo-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='coreduo'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='coreduo-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='n270'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='n270-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='phenom'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='phenom-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </cpu>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <memoryBacking supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <enum name='sourceType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>file</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>anonymous</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>memfd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </memoryBacking>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <devices>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <disk supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='diskDevice'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>disk</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>cdrom</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>floppy</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>lun</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='bus'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>fdc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>scsi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>sata</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-non-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <graphics supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vnc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>egl-headless</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dbus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </graphics>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <video supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='modelType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vga</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>cirrus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>none</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>bochs</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>ramfb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </video>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <hostdev supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='mode'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>subsystem</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='startupPolicy'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>default</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>mandatory</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>requisite</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>optional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='subsysType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pci</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>scsi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='capsType'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='pciBackend'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </hostdev>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <rng supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-non-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>random</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>egd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>builtin</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </rng>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <filesystem supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='driverType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>path</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>handle</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtiofs</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </filesystem>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <tpm supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tpm-tis</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tpm-crb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>emulator</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>external</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendVersion'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>2.0</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </tpm>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <redirdev supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='bus'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </redirdev>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <channel supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pty</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>unix</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </channel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <crypto supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>qemu</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>builtin</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </crypto>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <interface supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>default</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>passt</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </interface>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <panic supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>isa</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>hyperv</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </panic>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <console supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>null</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pty</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dev</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>file</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pipe</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>stdio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>udp</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tcp</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>unix</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>qemu-vdagent</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dbus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </console>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </devices>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <features>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <gic supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <vmcoreinfo supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <genid supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <backingStoreInput supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <backup supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <async-teardown supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <s390-pv supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <ps2 supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <tdx supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <sev supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <sgx supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <hyperv supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='features'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>relaxed</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vapic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>spinlocks</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vpindex</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>runtime</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>synic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>stimer</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>reset</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vendor_id</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>frequencies</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>reenlightenment</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tlbflush</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>ipi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>avic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>emsr_bitmap</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>xmm_input</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <defaults>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <spinlocks>4095</spinlocks>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <stimer_direct>on</stimer_direct>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </defaults>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </hyperv>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <launchSecurity supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </features>
Jan 23 10:10:57 compute-0 nova_compute[249229]: </domainCapabilities>
Jan 23 10:10:57 compute-0 nova_compute[249229]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.473 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 23 10:10:57 compute-0 nova_compute[249229]: <domainCapabilities>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <domain>kvm</domain>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <arch>i686</arch>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <vcpu max='240'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <iothreads supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <os supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <enum name='firmware'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <loader supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>rom</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pflash</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='readonly'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>yes</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>no</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='secure'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>no</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </loader>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </os>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <cpu>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='host-passthrough' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='hostPassthroughMigratable'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>on</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>off</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='maximum' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='maximumMigratable'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>on</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>off</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='host-model' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <vendor>AMD</vendor>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='x2apic'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='hypervisor'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='stibp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='overflow-recov'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='succor'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='lbrv'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc-scale'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='flushbyasid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='pause-filter'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='pfthreshold'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='disable' name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='custom' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='ClearwaterForest'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ddpd-u'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sha512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm3'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='ClearwaterForest-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ddpd-u'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sha512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm3'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Dhyana-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Turin'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbpb'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Turin-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbpb'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-128'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-256'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-128'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-256'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 ceph-mon[74335]: pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v6'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v7'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='KnightsMill'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512er'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512pf'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='KnightsMill-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512er'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512pf'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G4-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tbm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G5-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tbm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:57 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='athlon'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='athlon-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='core2duo'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='core2duo-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='coreduo'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='coreduo-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='n270'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='n270-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='phenom'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='phenom-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </cpu>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <memoryBacking supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <enum name='sourceType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>file</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>anonymous</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>memfd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </memoryBacking>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <devices>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <disk supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='diskDevice'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>disk</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>cdrom</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>floppy</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>lun</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='bus'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>ide</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>fdc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>scsi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>sata</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-non-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <graphics supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vnc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>egl-headless</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dbus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </graphics>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <video supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='modelType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vga</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>cirrus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>none</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>bochs</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>ramfb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </video>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <hostdev supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='mode'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>subsystem</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='startupPolicy'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>default</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>mandatory</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>requisite</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>optional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='subsysType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pci</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>scsi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='capsType'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='pciBackend'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </hostdev>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <rng supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-non-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>random</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>egd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>builtin</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </rng>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <filesystem supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='driverType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>path</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>handle</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtiofs</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </filesystem>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <tpm supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tpm-tis</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tpm-crb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>emulator</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>external</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendVersion'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>2.0</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </tpm>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <redirdev supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='bus'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </redirdev>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <channel supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pty</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>unix</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </channel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <crypto supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>qemu</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>builtin</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </crypto>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <interface supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>default</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>passt</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </interface>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <panic supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>isa</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>hyperv</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </panic>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <console supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>null</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pty</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dev</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>file</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pipe</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>stdio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>udp</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tcp</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>unix</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>qemu-vdagent</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dbus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </console>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </devices>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <features>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <gic supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <vmcoreinfo supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <genid supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <backingStoreInput supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <backup supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <async-teardown supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <s390-pv supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <ps2 supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <tdx supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <sev supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <sgx supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <hyperv supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='features'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>relaxed</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vapic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>spinlocks</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vpindex</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>runtime</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>synic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>stimer</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>reset</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vendor_id</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>frequencies</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>reenlightenment</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tlbflush</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>ipi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>avic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>emsr_bitmap</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>xmm_input</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <defaults>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <spinlocks>4095</spinlocks>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <stimer_direct>on</stimer_direct>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </defaults>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </hyperv>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <launchSecurity supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </features>
Jan 23 10:10:57 compute-0 nova_compute[249229]: </domainCapabilities>
Jan 23 10:10:57 compute-0 nova_compute[249229]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.527 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.532 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 23 10:10:57 compute-0 nova_compute[249229]: <domainCapabilities>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <domain>kvm</domain>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <arch>x86_64</arch>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <vcpu max='4096'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <iothreads supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <os supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <enum name='firmware'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>efi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <loader supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>rom</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pflash</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='readonly'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>yes</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>no</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='secure'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>yes</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>no</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </loader>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </os>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <cpu>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='host-passthrough' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='hostPassthroughMigratable'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>on</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>off</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='maximum' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='maximumMigratable'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>on</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>off</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='host-model' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <vendor>AMD</vendor>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='x2apic'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='hypervisor'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='stibp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='overflow-recov'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='succor'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='lbrv'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc-scale'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='flushbyasid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='pause-filter'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='pfthreshold'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='disable' name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='custom' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='ClearwaterForest'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ddpd-u'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sha512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm3'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='ClearwaterForest-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ddpd-u'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sha512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm3'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Dhyana-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Turin'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbpb'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Turin-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbpb'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-128'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-256'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-128'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-256'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v6'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v7'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='KnightsMill'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512er'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512pf'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='KnightsMill-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512er'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512pf'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G4-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tbm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G5-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tbm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='athlon'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='athlon-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='core2duo'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='core2duo-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='coreduo'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='coreduo-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='n270'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='n270-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='phenom'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='phenom-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </cpu>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <memoryBacking supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <enum name='sourceType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>file</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>anonymous</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>memfd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </memoryBacking>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <devices>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <disk supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='diskDevice'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>disk</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>cdrom</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>floppy</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>lun</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='bus'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>fdc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>scsi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>sata</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-non-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <graphics supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vnc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>egl-headless</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dbus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </graphics>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <video supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='modelType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vga</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>cirrus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>none</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>bochs</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>ramfb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </video>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <hostdev supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='mode'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>subsystem</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='startupPolicy'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>default</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>mandatory</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>requisite</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>optional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='subsysType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pci</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>scsi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='capsType'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='pciBackend'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </hostdev>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <rng supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-non-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>random</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>egd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>builtin</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </rng>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <filesystem supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='driverType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>path</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>handle</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtiofs</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </filesystem>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <tpm supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tpm-tis</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tpm-crb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>emulator</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>external</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendVersion'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>2.0</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </tpm>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <redirdev supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='bus'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </redirdev>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <channel supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pty</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>unix</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </channel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <crypto supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>qemu</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>builtin</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </crypto>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <interface supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>default</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>passt</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </interface>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <panic supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>isa</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>hyperv</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </panic>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <console supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>null</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pty</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dev</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>file</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pipe</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>stdio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>udp</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tcp</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>unix</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>qemu-vdagent</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dbus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </console>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </devices>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <features>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <gic supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <vmcoreinfo supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <genid supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <backingStoreInput supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <backup supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <async-teardown supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <s390-pv supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <ps2 supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <tdx supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <sev supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <sgx supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <hyperv supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='features'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>relaxed</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vapic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>spinlocks</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vpindex</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>runtime</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>synic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>stimer</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>reset</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vendor_id</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>frequencies</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>reenlightenment</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tlbflush</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>ipi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>avic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>emsr_bitmap</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>xmm_input</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <defaults>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <spinlocks>4095</spinlocks>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <stimer_direct>on</stimer_direct>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </defaults>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </hyperv>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <launchSecurity supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </features>
Jan 23 10:10:57 compute-0 nova_compute[249229]: </domainCapabilities>
Jan 23 10:10:57 compute-0 nova_compute[249229]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.618 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 23 10:10:57 compute-0 nova_compute[249229]: <domainCapabilities>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <path>/usr/libexec/qemu-kvm</path>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <domain>kvm</domain>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <arch>x86_64</arch>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <vcpu max='240'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <iothreads supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <os supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <enum name='firmware'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <loader supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>rom</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pflash</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='readonly'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>yes</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>no</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='secure'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>no</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </loader>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </os>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <cpu>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='host-passthrough' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='hostPassthroughMigratable'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>on</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>off</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='maximum' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='maximumMigratable'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>on</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>off</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='host-model' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <vendor>AMD</vendor>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='x2apic'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc-deadline'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='hypervisor'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc_adjust'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='spec-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='stibp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='cmp_legacy'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='overflow-recov'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='succor'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='amd-ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='virt-ssbd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='lbrv'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='tsc-scale'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='vmcb-clean'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='flushbyasid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='pause-filter'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='pfthreshold'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='svme-addr-chk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <feature policy='disable' name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <mode name='custom' supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Broadwell-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cascadelake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='ClearwaterForest'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ddpd-u'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sha512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm3'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='ClearwaterForest-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ddpd-u'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sha512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm3'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sm4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Cooperlake-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Denverton-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Dhyana-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Genoa-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Milan-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Rome-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Turin'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbpb'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-Turin-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amd-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='auto-ibrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vp2intersect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fs-gs-base-ns'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibpb-brtype'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='no-nested-data-bp'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='null-sel-clr-base'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='perfmon-v2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbpb'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='srso-user-kernel-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='stibp-always-on'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='EPYC-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-128'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-256'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='GraniteRapids-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-128'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-256'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx10-512'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='prefetchiti'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Haswell-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-noTSX'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v6'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Icelake-Server-v7'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='IvyBridge-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='KnightsMill'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512er'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512pf'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='KnightsMill-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4fmaps'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-4vnniw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512er'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512pf'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G4-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tbm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Opteron_G5-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fma4'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tbm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xop'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SapphireRapids-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='amx-tile'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-bf16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-fp16'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512-vpopcntdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bitalg'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vbmi2'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrc'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fzrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='la57'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='taa-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='tsx-ldtrk'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='SierraForest-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ifma'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-ne-convert'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx-vnni-int8'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bhi-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='bus-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cmpccxadd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fbsdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='fsrs'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ibrs-all'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='intel-psfd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ipred-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='lam'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mcdt-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pbrsb-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='psdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rrsba-ctrl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='sbdr-ssdp-no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='serialize'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vaes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='vpclmulqdq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Client-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='hle'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='rtm'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Skylake-Server-v5'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512bw'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512cd'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512dq'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512f'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='avx512vl'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='invpcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pcid'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='pku'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='mpx'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v2'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v3'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='core-capability'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='split-lock-detect'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='Snowridge-v4'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='cldemote'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='erms'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='gfni'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdir64b'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='movdiri'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='xsaves'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='athlon'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='athlon-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='core2duo'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='core2duo-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='coreduo'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='coreduo-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='n270'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='n270-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='ss'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='phenom'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <blockers model='phenom-v1'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnow'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <feature name='3dnowext'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </blockers>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </mode>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </cpu>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <memoryBacking supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <enum name='sourceType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>file</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>anonymous</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <value>memfd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </memoryBacking>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <devices>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <disk supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='diskDevice'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>disk</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>cdrom</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>floppy</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>lun</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='bus'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>ide</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>fdc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>scsi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>sata</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-non-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <graphics supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vnc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>egl-headless</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dbus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </graphics>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <video supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='modelType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vga</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>cirrus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>none</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>bochs</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>ramfb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </video>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <hostdev supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='mode'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>subsystem</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='startupPolicy'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>default</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>mandatory</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>requisite</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>optional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='subsysType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pci</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>scsi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='capsType'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='pciBackend'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </hostdev>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <rng supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtio-non-transitional</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>random</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>egd</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>builtin</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </rng>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <filesystem supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='driverType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>path</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>handle</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>virtiofs</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </filesystem>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <tpm supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tpm-tis</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tpm-crb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>emulator</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>external</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendVersion'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>2.0</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </tpm>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <redirdev supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='bus'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>usb</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </redirdev>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <channel supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pty</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>unix</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </channel>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <crypto supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>qemu</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendModel'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>builtin</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </crypto>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <interface supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='backendType'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>default</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>passt</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </interface>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <panic supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='model'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>isa</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>hyperv</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </panic>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <console supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='type'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>null</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vc</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pty</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dev</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>file</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>pipe</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>stdio</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>udp</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tcp</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>unix</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>qemu-vdagent</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>dbus</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </console>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </devices>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   <features>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <gic supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <vmcoreinfo supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <genid supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <backingStoreInput supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <backup supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <async-teardown supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <s390-pv supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <ps2 supported='yes'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <tdx supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <sev supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <sgx supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <hyperv supported='yes'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <enum name='features'>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>relaxed</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vapic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>spinlocks</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vpindex</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>runtime</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>synic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>stimer</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>reset</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>vendor_id</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>frequencies</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>reenlightenment</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>tlbflush</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>ipi</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>avic</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>emsr_bitmap</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <value>xmm_input</value>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </enum>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       <defaults>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <spinlocks>4095</spinlocks>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <stimer_direct>on</stimer_direct>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <tlbflush_direct>on</tlbflush_direct>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <tlbflush_extended>on</tlbflush_extended>
Jan 23 10:10:57 compute-0 nova_compute[249229]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 10:10:57 compute-0 nova_compute[249229]:       </defaults>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     </hyperv>
Jan 23 10:10:57 compute-0 nova_compute[249229]:     <launchSecurity supported='no'/>
Jan 23 10:10:57 compute-0 nova_compute[249229]:   </features>
Jan 23 10:10:57 compute-0 nova_compute[249229]: </domainCapabilities>
Jan 23 10:10:57 compute-0 nova_compute[249229]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.696 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.697 249233 INFO nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Secure Boot support detected
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.699 249233 INFO nova.virt.libvirt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.699 249233 INFO nova.virt.libvirt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.708 249233 DEBUG nova.virt.libvirt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.724 249233 INFO nova.virt.node [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Determined node identity a1f82a16-d7e7-4500-99d7-a20de995d9a2 from /var/lib/nova/compute_id
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.745 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Verified node a1f82a16-d7e7-4500-99d7-a20de995d9a2 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.780 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 23 10:10:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:57.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.850 249233 DEBUG oslo_concurrency.lockutils [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.850 249233 DEBUG oslo_concurrency.lockutils [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.850 249233 DEBUG oslo_concurrency.lockutils [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.850 249233 DEBUG nova.compute.resource_tracker [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:10:57 compute-0 nova_compute[249229]: 2026-01-23 10:10:57.851 249233 DEBUG oslo_concurrency.processutils [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.311 249233 DEBUG oslo_concurrency.processutils [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
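The "ceph df --format=json" subprocess above is how the resource tracker samples RBD capacity before reporting DISK_GB. A rough sketch of issuing the same command and reading the cluster totals; the JSON key names ('stats', 'total_bytes', 'total_avail_bytes') are not visible in this log and are an assumption about the usual ceph df output:

    # Illustrative sketch: run the same query nova-compute runs above and print
    # cluster-wide capacity. Key names are assumed, not taken from this log.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)

    stats = df.get('stats', {})
    total_gib = stats.get('total_bytes', 0) / 1024 ** 3
    avail_gib = stats.get('total_avail_bytes', 0) / 1024 ** 3
    print(f"cluster capacity: {total_gib:.1f} GiB total, {avail_gib:.1f} GiB available")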
Jan 23 10:10:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:10:58 compute-0 rsyslogd[1003]: imjournal from <np0005593293:nova_compute>: begin to drop messages due to rate-limiting
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.461 249233 WARNING nova.virt.libvirt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.462 249233 DEBUG nova.compute.resource_tracker [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4939MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.462 249233 DEBUG oslo_concurrency.lockutils [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.462 249233 DEBUG oslo_concurrency.lockutils [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:10:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1853406190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:10:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/758454433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:10:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1515803724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.735 249233 DEBUG nova.compute.resource_tracker [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.735 249233 DEBUG nova.compute.resource_tracker [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.800 249233 DEBUG nova.scheduler.client.report [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Refreshing inventories for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.835 249233 DEBUG nova.scheduler.client.report [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Updating ProviderTree inventory for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 from _refresh_and_get_inventory using data: {} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.835 249233 DEBUG nova.compute.provider_tree [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.857 249233 DEBUG nova.scheduler.client.report [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Refreshing aggregate associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.877 249233 DEBUG nova.scheduler.client.report [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Refreshing trait associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, traits: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 10:10:58 compute-0 nova_compute[249229]: 2026-01-23 10:10:58.894 249233 DEBUG oslo_concurrency.processutils [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:10:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:59 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:10:59 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:10:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:59.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:10:59 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862771379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.333 249233 DEBUG oslo_concurrency.processutils [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.339 249233 DEBUG nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 23 10:10:59 compute-0 nova_compute[249229]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.339 249233 INFO nova.virt.libvirt.host [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] kernel doesn't support AMD SEV
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.340 249233 DEBUG nova.compute.provider_tree [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Updating inventory in ProviderTree for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.341 249233 DEBUG nova.virt.libvirt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.385 249233 DEBUG nova.scheduler.client.report [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Updated inventory for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.386 249233 DEBUG nova.compute.provider_tree [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Updating resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.386 249233 DEBUG nova.compute.provider_tree [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Updating inventory in ProviderTree for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.475 249233 DEBUG nova.compute.provider_tree [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Updating resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.505 249233 DEBUG nova.compute.resource_tracker [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.505 249233 DEBUG oslo_concurrency.lockutils [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.505 249233 DEBUG nova.service [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 23 10:10:59 compute-0 podman[249577]: 2026-01-23 10:10:59.55699716 +0000 UTC m=+0.077926699 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.628 249233 DEBUG nova.service [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 23 10:10:59 compute-0 nova_compute[249229]: 2026-01-23 10:10:59.628 249233 DEBUG nova.servicegroup.drivers.db [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 23 10:10:59 compute-0 ceph-mon[74335]: pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:10:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1862771379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:10:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2360008108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:10:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1031836071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:10:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:10:59.762 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:10:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:10:59.764 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:10:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:10:59.764 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:10:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:10:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:10:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:59.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:10:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:59] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:10:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:10:59] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:11:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:11:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:11:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60003420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:00 compute-0 ceph-mon[74335]: pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:11:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:01 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b5000bee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:01 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:01.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:01 compute-0 sudo[249607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:11:01 compute-0 sudo[249607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:01 compute-0 sudo[249607]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:01.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 23 10:11:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:02 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:03 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60003420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:03 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b5000bee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:03.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:03 compute-0 ceph-mon[74335]: pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 23 10:11:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:03.576Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:11:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:03.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:11:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:03.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 23 10:11:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101104 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:11:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:05 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:05 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60003420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:11:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:11:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:05.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:05.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:05 compute-0 ceph-mon[74335]: pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 23 10:11:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:05.947741) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163065948181, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1167, "num_deletes": 251, "total_data_size": 2141110, "memory_usage": 2170872, "flush_reason": "Manual Compaction"}
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 23 10:11:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163065969982, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 2091630, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18936, "largest_seqno": 20102, "table_properties": {"data_size": 2086047, "index_size": 2978, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12014, "raw_average_key_size": 19, "raw_value_size": 2074807, "raw_average_value_size": 3435, "num_data_blocks": 132, "num_entries": 604, "num_filter_entries": 604, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162946, "oldest_key_time": 1769162946, "file_creation_time": 1769163065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 22240 microseconds, and 13676 cpu microseconds.
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:05.970096) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 2091630 bytes OK
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:05.970144) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:05.972069) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:05.972101) EVENT_LOG_v1 {"time_micros": 1769163065972095, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:05.972120) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2135930, prev total WAL file size 2135930, number of live WAL files 2.
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:05.973202) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(2042KB)], [41(12MB)]
Jan 23 10:11:05 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163065973425, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15518902, "oldest_snapshot_seqno": -1}
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5038 keys, 13258609 bytes, temperature: kUnknown
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163066090605, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 13258609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13223404, "index_size": 21527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 128419, "raw_average_key_size": 25, "raw_value_size": 13130259, "raw_average_value_size": 2606, "num_data_blocks": 885, "num_entries": 5038, "num_filter_entries": 5038, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769163065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:06.090938) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13258609 bytes
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:06.196450) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.3 rd, 113.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 12.8 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(13.8) write-amplify(6.3) OK, records in: 5556, records dropped: 518 output_compression: NoCompression
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:06.196493) EVENT_LOG_v1 {"time_micros": 1769163066196477, "job": 20, "event": "compaction_finished", "compaction_time_micros": 117265, "compaction_time_cpu_micros": 42502, "output_level": 6, "num_output_files": 1, "total_output_size": 13258609, "num_input_records": 5556, "num_output_records": 5038, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163066196993, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163066199711, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:05.972978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:06.199747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:06.199752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:06.199754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:06.199756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:11:06 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:11:06.199758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:11:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 23 10:11:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:06 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b5000bee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:07 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:07 compute-0 ceph-mon[74335]: pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 23 10:11:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:07 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:07.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:11:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:07.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:07.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:11:08 compute-0 podman[249640]: 2026-01-23 10:11:08.514493437 +0000 UTC m=+0.046724903 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 10:11:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60003420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:09 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b5000bee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:09 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60003420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:09.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:09 compute-0 ceph-mon[74335]: pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:11:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:09.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:09] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:11:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:09] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:11:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:11:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:11 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:11 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b5000bee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:11 compute-0 ceph-mon[74335]: pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:11:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:11.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:11.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:11:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60003420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:13 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:13 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:13 compute-0 ceph-mon[74335]: pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:11:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:13.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:13.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:11:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:11:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:13.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:11:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b5000bee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:15 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60003420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:15 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:15.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:15 compute-0 ceph-mon[74335]: pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:11:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:15.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:11:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:16 compute-0 nova_compute[249229]: 2026-01-23 10:11:16.631 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:11:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:16 compute-0 nova_compute[249229]: 2026-01-23 10:11:16.834 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:11:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:17 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b5000bee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:17 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60004d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:17.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:11:17 compute-0 ceph-mon[74335]: pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:17.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:17 compute-0 sudo[249668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:11:17 compute-0 sudo[249668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:17 compute-0 sudo[249668]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:17 compute-0 sudo[249693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:11:17 compute-0 sudo[249693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:17.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 10:11:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 10:11:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:18 compute-0 sudo[249693]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:11:18 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:11:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:11:18 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:11:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:11:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:18 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:11:18 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:11:18 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:11:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:11:18 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:11:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:11:18 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:11:18 compute-0 sudo[249751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:11:18 compute-0 sudo[249751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:18 compute-0 sudo[249751]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:18 compute-0 sudo[249776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:11:18 compute-0 sudo[249776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:19 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:19 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b5000bee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:19 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:19 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:19 compute-0 ceph-mon[74335]: pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:19 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:11:19 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:11:19 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:19 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:19 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:11:19 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:11:19 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:11:19 compute-0 podman[249838]: 2026-01-23 10:11:19.271815124 +0000 UTC m=+0.061674879 container create 70b0ad26d39a028d5d1c9ec8808c934b881f528bea12cf16f252511ac090f615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 10:11:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:19.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:19 compute-0 systemd[1]: Started libpod-conmon-70b0ad26d39a028d5d1c9ec8808c934b881f528bea12cf16f252511ac090f615.scope.
Jan 23 10:11:19 compute-0 podman[249838]: 2026-01-23 10:11:19.23715648 +0000 UTC m=+0.027016265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:11:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:11:19 compute-0 podman[249838]: 2026-01-23 10:11:19.363767458 +0000 UTC m=+0.153627243 container init 70b0ad26d39a028d5d1c9ec8808c934b881f528bea12cf16f252511ac090f615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_almeida, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:11:19 compute-0 podman[249838]: 2026-01-23 10:11:19.371108431 +0000 UTC m=+0.160968186 container start 70b0ad26d39a028d5d1c9ec8808c934b881f528bea12cf16f252511ac090f615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 10:11:19 compute-0 podman[249838]: 2026-01-23 10:11:19.374343972 +0000 UTC m=+0.164203747 container attach 70b0ad26d39a028d5d1c9ec8808c934b881f528bea12cf16f252511ac090f615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:11:19 compute-0 exciting_almeida[249854]: 167 167
Jan 23 10:11:19 compute-0 systemd[1]: libpod-70b0ad26d39a028d5d1c9ec8808c934b881f528bea12cf16f252511ac090f615.scope: Deactivated successfully.
Jan 23 10:11:19 compute-0 podman[249838]: 2026-01-23 10:11:19.379671355 +0000 UTC m=+0.169531140 container died 70b0ad26d39a028d5d1c9ec8808c934b881f528bea12cf16f252511ac090f615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Jan 23 10:11:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1714d73a8a32c52556382576ca909803092da930ed01bcae80c7f783f9adac1-merged.mount: Deactivated successfully.
Jan 23 10:11:19 compute-0 podman[249838]: 2026-01-23 10:11:19.417202961 +0000 UTC m=+0.207062716 container remove 70b0ad26d39a028d5d1c9ec8808c934b881f528bea12cf16f252511ac090f615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_almeida, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:11:19 compute-0 systemd[1]: libpod-conmon-70b0ad26d39a028d5d1c9ec8808c934b881f528bea12cf16f252511ac090f615.scope: Deactivated successfully.
Jan 23 10:11:19 compute-0 podman[249879]: 2026-01-23 10:11:19.631290872 +0000 UTC m=+0.102587180 container create 120f1737f6a4f97cc02d2bbacb668f9a548ba1e00f2afadbccf786b697d5fe76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:11:19 compute-0 podman[249879]: 2026-01-23 10:11:19.554008314 +0000 UTC m=+0.025304652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:11:19 compute-0 systemd[1]: Started libpod-conmon-120f1737f6a4f97cc02d2bbacb668f9a548ba1e00f2afadbccf786b697d5fe76.scope.
Jan 23 10:11:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3209658cea2cb3ad6664de58428752daeb627c25913471aaef56e41ff0917a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3209658cea2cb3ad6664de58428752daeb627c25913471aaef56e41ff0917a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3209658cea2cb3ad6664de58428752daeb627c25913471aaef56e41ff0917a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3209658cea2cb3ad6664de58428752daeb627c25913471aaef56e41ff0917a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3209658cea2cb3ad6664de58428752daeb627c25913471aaef56e41ff0917a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:19.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:19 compute-0 podman[249879]: 2026-01-23 10:11:19.871606177 +0000 UTC m=+0.342902535 container init 120f1737f6a4f97cc02d2bbacb668f9a548ba1e00f2afadbccf786b697d5fe76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 10:11:19 compute-0 podman[249879]: 2026-01-23 10:11:19.879917034 +0000 UTC m=+0.351213352 container start 120f1737f6a4f97cc02d2bbacb668f9a548ba1e00f2afadbccf786b697d5fe76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 23 10:11:19 compute-0 podman[249879]: 2026-01-23 10:11:19.886229112 +0000 UTC m=+0.357525420 container attach 120f1737f6a4f97cc02d2bbacb668f9a548ba1e00f2afadbccf786b697d5fe76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 10:11:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:19] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:11:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:19] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:11:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:11:19
Jan 23 10:11:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:11:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:11:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.log', 'default.rgw.control', 'volumes', 'images', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.nfs']
Jan 23 10:11:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:11:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:11:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:11:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:11:20 compute-0 goofy_cannon[249895]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:11:20 compute-0 goofy_cannon[249895]: --> All data devices are unavailable
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:11:20 compute-0 systemd[1]: libpod-120f1737f6a4f97cc02d2bbacb668f9a548ba1e00f2afadbccf786b697d5fe76.scope: Deactivated successfully.
Jan 23 10:11:20 compute-0 podman[249879]: 2026-01-23 10:11:20.287980563 +0000 UTC m=+0.759276871 container died 120f1737f6a4f97cc02d2bbacb668f9a548ba1e00f2afadbccf786b697d5fe76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cannon, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d3209658cea2cb3ad6664de58428752daeb627c25913471aaef56e41ff0917a-merged.mount: Deactivated successfully.
Jan 23 10:11:20 compute-0 podman[249879]: 2026-01-23 10:11:20.342282028 +0000 UTC m=+0.813578336 container remove 120f1737f6a4f97cc02d2bbacb668f9a548ba1e00f2afadbccf786b697d5fe76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:11:20 compute-0 systemd[1]: libpod-conmon-120f1737f6a4f97cc02d2bbacb668f9a548ba1e00f2afadbccf786b697d5fe76.scope: Deactivated successfully.
Jan 23 10:11:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:20 compute-0 sudo[249776]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:20 compute-0 sudo[249925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:11:20 compute-0 sudo[249925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:20 compute-0 sudo[249925]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:20 compute-0 sudo[249950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:11:20 compute-0 sudo[249950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60004d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:20 compute-0 podman[250017]: 2026-01-23 10:11:20.933388862 +0000 UTC m=+0.066402797 container create 77b9f70eceb16b0c271f1a8257956e17bfebfe1638890ddf13149bdbaf8f864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lehmann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 10:11:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:20 compute-0 podman[250017]: 2026-01-23 10:11:20.890490452 +0000 UTC m=+0.023504437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:11:20 compute-0 systemd[1]: Started libpod-conmon-77b9f70eceb16b0c271f1a8257956e17bfebfe1638890ddf13149bdbaf8f864e.scope.
Jan 23 10:11:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:11:21 compute-0 podman[250017]: 2026-01-23 10:11:21.020235619 +0000 UTC m=+0.153249584 container init 77b9f70eceb16b0c271f1a8257956e17bfebfe1638890ddf13149bdbaf8f864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 10:11:21 compute-0 podman[250017]: 2026-01-23 10:11:21.026235989 +0000 UTC m=+0.159249924 container start 77b9f70eceb16b0c271f1a8257956e17bfebfe1638890ddf13149bdbaf8f864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lehmann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:11:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:21 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:21 compute-0 inspiring_lehmann[250033]: 167 167
Jan 23 10:11:21 compute-0 podman[250017]: 2026-01-23 10:11:21.029571142 +0000 UTC m=+0.162585097 container attach 77b9f70eceb16b0c271f1a8257956e17bfebfe1638890ddf13149bdbaf8f864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:11:21 compute-0 systemd[1]: libpod-77b9f70eceb16b0c271f1a8257956e17bfebfe1638890ddf13149bdbaf8f864e.scope: Deactivated successfully.
Jan 23 10:11:21 compute-0 podman[250017]: 2026-01-23 10:11:21.030614738 +0000 UTC m=+0.163628683 container died 77b9f70eceb16b0c271f1a8257956e17bfebfe1638890ddf13149bdbaf8f864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 10:11:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:21 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-101a40888fdd6508086db5de8e77150c7ddbf8f59936378e329319f5707bd63b-merged.mount: Deactivated successfully.
Jan 23 10:11:21 compute-0 podman[250017]: 2026-01-23 10:11:21.079636441 +0000 UTC m=+0.212650376 container remove 77b9f70eceb16b0c271f1a8257956e17bfebfe1638890ddf13149bdbaf8f864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Jan 23 10:11:21 compute-0 systemd[1]: libpod-conmon-77b9f70eceb16b0c271f1a8257956e17bfebfe1638890ddf13149bdbaf8f864e.scope: Deactivated successfully.
Jan 23 10:11:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 23 10:11:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:21.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 23 10:11:21 compute-0 podman[250056]: 2026-01-23 10:11:21.227726565 +0000 UTC m=+0.022139973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:11:21 compute-0 ceph-mon[74335]: pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:21 compute-0 podman[250056]: 2026-01-23 10:11:21.501737361 +0000 UTC m=+0.296150739 container create bd0d494d84c537ed601df714ab07be1122430279ac4504e35a40d5a958bb634c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:11:21 compute-0 systemd[1]: Started libpod-conmon-bd0d494d84c537ed601df714ab07be1122430279ac4504e35a40d5a958bb634c.scope.
Jan 23 10:11:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a32b54f6935271e32655d2cda451f93f6292364b53e6055d210ff8f03af0aae9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a32b54f6935271e32655d2cda451f93f6292364b53e6055d210ff8f03af0aae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a32b54f6935271e32655d2cda451f93f6292364b53e6055d210ff8f03af0aae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a32b54f6935271e32655d2cda451f93f6292364b53e6055d210ff8f03af0aae9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:21 compute-0 sudo[250075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:11:21 compute-0 sudo[250075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:21 compute-0 sudo[250075]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:21 compute-0 podman[250056]: 2026-01-23 10:11:21.767960191 +0000 UTC m=+0.562373579 container init bd0d494d84c537ed601df714ab07be1122430279ac4504e35a40d5a958bb634c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 10:11:21 compute-0 podman[250056]: 2026-01-23 10:11:21.774213368 +0000 UTC m=+0.568626746 container start bd0d494d84c537ed601df714ab07be1122430279ac4504e35a40d5a958bb634c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brahmagupta, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 23 10:11:21 compute-0 podman[250056]: 2026-01-23 10:11:21.779479379 +0000 UTC m=+0.573892787 container attach bd0d494d84c537ed601df714ab07be1122430279ac4504e35a40d5a958bb634c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:11:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:21.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]: {
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:     "1": [
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:         {
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             "devices": [
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "/dev/loop3"
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             ],
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             "lv_name": "ceph_lv0",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             "lv_size": "21470642176",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             "name": "ceph_lv0",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             "tags": {
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.cluster_name": "ceph",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.crush_device_class": "",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.encrypted": "0",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.osd_id": "1",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.type": "block",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.vdo": "0",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:                 "ceph.with_tpm": "0"
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             },
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             "type": "block",
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:             "vg_name": "ceph_vg0"
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:         }
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]:     ]
Jan 23 10:11:22 compute-0 charming_brahmagupta[250073]: }
Jan 23 10:11:22 compute-0 systemd[1]: libpod-bd0d494d84c537ed601df714ab07be1122430279ac4504e35a40d5a958bb634c.scope: Deactivated successfully.
Jan 23 10:11:22 compute-0 conmon[250073]: conmon bd0d494d84c537ed601d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd0d494d84c537ed601df714ab07be1122430279ac4504e35a40d5a958bb634c.scope/container/memory.events
Jan 23 10:11:22 compute-0 podman[250056]: 2026-01-23 10:11:22.076629771 +0000 UTC m=+0.871043169 container died bd0d494d84c537ed601df714ab07be1122430279ac4504e35a40d5a958bb634c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brahmagupta, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 10:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a32b54f6935271e32655d2cda451f93f6292364b53e6055d210ff8f03af0aae9-merged.mount: Deactivated successfully.
Jan 23 10:11:22 compute-0 podman[250056]: 2026-01-23 10:11:22.2389198 +0000 UTC m=+1.033333178 container remove bd0d494d84c537ed601df714ab07be1122430279ac4504e35a40d5a958bb634c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:11:22 compute-0 systemd[1]: libpod-conmon-bd0d494d84c537ed601df714ab07be1122430279ac4504e35a40d5a958bb634c.scope: Deactivated successfully.
Jan 23 10:11:22 compute-0 sudo[249950]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:22 compute-0 sudo[250120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:11:22 compute-0 sudo[250120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:22 compute-0 sudo[250120]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:11:22 compute-0 sudo[250145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:11:22 compute-0 sudo[250145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b5000bee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:22 compute-0 podman[250211]: 2026-01-23 10:11:22.831313197 +0000 UTC m=+0.040488471 container create 677dbb56f8aa47d5a6c0223d45d638efad354883611da1926e437bf162fe8937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_curran, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 10:11:22 compute-0 systemd[1]: Started libpod-conmon-677dbb56f8aa47d5a6c0223d45d638efad354883611da1926e437bf162fe8937.scope.
Jan 23 10:11:22 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:11:22 compute-0 podman[250211]: 2026-01-23 10:11:22.81256998 +0000 UTC m=+0.021745274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:11:22 compute-0 podman[250211]: 2026-01-23 10:11:22.909545179 +0000 UTC m=+0.118720473 container init 677dbb56f8aa47d5a6c0223d45d638efad354883611da1926e437bf162fe8937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:11:22 compute-0 podman[250211]: 2026-01-23 10:11:22.918510132 +0000 UTC m=+0.127685396 container start 677dbb56f8aa47d5a6c0223d45d638efad354883611da1926e437bf162fe8937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 10:11:22 compute-0 podman[250211]: 2026-01-23 10:11:22.922883311 +0000 UTC m=+0.132058615 container attach 677dbb56f8aa47d5a6c0223d45d638efad354883611da1926e437bf162fe8937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_curran, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:11:22 compute-0 vibrant_curran[250227]: 167 167
Jan 23 10:11:22 compute-0 systemd[1]: libpod-677dbb56f8aa47d5a6c0223d45d638efad354883611da1926e437bf162fe8937.scope: Deactivated successfully.
Jan 23 10:11:22 compute-0 podman[250211]: 2026-01-23 10:11:22.927884756 +0000 UTC m=+0.137060030 container died 677dbb56f8aa47d5a6c0223d45d638efad354883611da1926e437bf162fe8937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 10:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1f21310d8ef1b55112130ee2752aab685f11ded0d872f6852aec565b8bdaf16-merged.mount: Deactivated successfully.
Jan 23 10:11:22 compute-0 podman[250211]: 2026-01-23 10:11:22.972445398 +0000 UTC m=+0.181620672 container remove 677dbb56f8aa47d5a6c0223d45d638efad354883611da1926e437bf162fe8937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 10:11:22 compute-0 systemd[1]: libpod-conmon-677dbb56f8aa47d5a6c0223d45d638efad354883611da1926e437bf162fe8937.scope: Deactivated successfully.
Jan 23 10:11:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:23 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60004d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:23 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:23 compute-0 podman[250251]: 2026-01-23 10:11:23.154088289 +0000 UTC m=+0.048258695 container create 1c037a4e795fc5c718fddc14ddf46fe5838615895131dcae06b20ca986533b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 23 10:11:23 compute-0 systemd[1]: Started libpod-conmon-1c037a4e795fc5c718fddc14ddf46fe5838615895131dcae06b20ca986533b1c.scope.
Jan 23 10:11:23 compute-0 ceph-mon[74335]: pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:11:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb909a661f3059b8f9bd30d2cfafaa437c34ab3bfccad0f6c7a3f11a53aa1e69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb909a661f3059b8f9bd30d2cfafaa437c34ab3bfccad0f6c7a3f11a53aa1e69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb909a661f3059b8f9bd30d2cfafaa437c34ab3bfccad0f6c7a3f11a53aa1e69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:23 compute-0 podman[250251]: 2026-01-23 10:11:23.130825849 +0000 UTC m=+0.024996285 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb909a661f3059b8f9bd30d2cfafaa437c34ab3bfccad0f6c7a3f11a53aa1e69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:11:23 compute-0 podman[250251]: 2026-01-23 10:11:23.236609918 +0000 UTC m=+0.130780344 container init 1c037a4e795fc5c718fddc14ddf46fe5838615895131dcae06b20ca986533b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 10:11:23 compute-0 podman[250251]: 2026-01-23 10:11:23.243596002 +0000 UTC m=+0.137766408 container start 1c037a4e795fc5c718fddc14ddf46fe5838615895131dcae06b20ca986533b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:11:23 compute-0 podman[250251]: 2026-01-23 10:11:23.247595212 +0000 UTC m=+0.141765618 container attach 1c037a4e795fc5c718fddc14ddf46fe5838615895131dcae06b20ca986533b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:11:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:11:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:23.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:11:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:23.579Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:11:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:23.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:23 compute-0 lvm[250343]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:11:23 compute-0 lvm[250343]: VG ceph_vg0 finished
Jan 23 10:11:23 compute-0 pedantic_driscoll[250268]: {}
Jan 23 10:11:23 compute-0 systemd[1]: libpod-1c037a4e795fc5c718fddc14ddf46fe5838615895131dcae06b20ca986533b1c.scope: Deactivated successfully.
Jan 23 10:11:23 compute-0 systemd[1]: libpod-1c037a4e795fc5c718fddc14ddf46fe5838615895131dcae06b20ca986533b1c.scope: Consumed 1.118s CPU time.
Jan 23 10:11:23 compute-0 podman[250251]: 2026-01-23 10:11:23.949027129 +0000 UTC m=+0.843197535 container died 1c037a4e795fc5c718fddc14ddf46fe5838615895131dcae06b20ca986533b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 10:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb909a661f3059b8f9bd30d2cfafaa437c34ab3bfccad0f6c7a3f11a53aa1e69-merged.mount: Deactivated successfully.
Jan 23 10:11:24 compute-0 podman[250251]: 2026-01-23 10:11:24.00076994 +0000 UTC m=+0.894940356 container remove 1c037a4e795fc5c718fddc14ddf46fe5838615895131dcae06b20ca986533b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:11:24 compute-0 systemd[1]: libpod-conmon-1c037a4e795fc5c718fddc14ddf46fe5838615895131dcae06b20ca986533b1c.scope: Deactivated successfully.
Jan 23 10:11:24 compute-0 sudo[250145]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:11:24 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:11:24 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:24 compute-0 sudo[250361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:11:24 compute-0 sudo[250361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:24 compute-0 sudo[250361]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:24 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:25 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b5000bee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:25 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b60004d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:25 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:25 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:11:25 compute-0 ceph-mon[74335]: pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 23 10:11:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:25.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 23 10:11:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:25.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:26 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:27 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:27 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b640014a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:27.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:11:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:27.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:27 compute-0 ceph-mon[74335]: pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:27.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:11:28 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3198864426' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:11:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:11:28 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3198864426' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:11:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:28 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1318506093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:11:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1318506093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:11:28 compute-0 ceph-mon[74335]: pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3198864426' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:11:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3198864426' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:11:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:29 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74009e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:29 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:29.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:29.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:29] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:11:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:29] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:11:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:30 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1327855081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:11:30 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1327855081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:11:30 compute-0 podman[250395]: 2026-01-23 10:11:30.589530379 +0000 UTC m=+0.115392790 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:11:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:30 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b640014a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:31 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74009fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:31 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b640014a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:11:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:31.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:11:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:11:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:31.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:11:32 compute-0 ceph-mon[74335]: pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:11:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:32 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:33 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:33 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74009fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:33 compute-0 ceph-mon[74335]: pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:11:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:33.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:33.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:11:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:33.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:34 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b640014a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:11:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:11:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:35 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:35 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:35.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:11:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:35.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:11:35 compute-0 ceph-mon[74335]: pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:11:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:36 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74009fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:37 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b640014a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:37 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:37.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:11:37 compute-0 ceph-mon[74335]: pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:37.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:11:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:11:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:38 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:39 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74009fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:39 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b640014a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:39.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:39 compute-0 podman[250430]: 2026-01-23 10:11:39.53120522 +0000 UTC m=+0.056611573 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:11:39 compute-0 ceph-mon[74335]: pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:39.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:39] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:11:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:39] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:11:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:40 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:41 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:41 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74009fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 23 10:11:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:41.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 23 10:11:41 compute-0 ceph-mon[74335]: pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:41 compute-0 sudo[250452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:11:41 compute-0 sudo[250452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:11:41 compute-0 sudo[250452]: pam_unix(sudo:session): session closed for user root
Jan 23 10:11:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:41.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:11:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:42 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:43 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:43 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:43 compute-0 ceph-mon[74335]: pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:11:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:43.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:43.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:11:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:43.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:11:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:43.582Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:11:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:43.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:44 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74009fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:45 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:45 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:11:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:45.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:11:45 compute-0 ceph-mon[74335]: pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:45.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:46 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:47 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74009fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:47.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:11:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:47 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:11:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:47.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:11:47 compute-0 ceph-mon[74335]: pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:47.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:48 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:48 compute-0 ceph-mon[74335]: pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:49 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b74009fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:49 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:49.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:49.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:49] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:11:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:49] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:11:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:11:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:11:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:11:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:11:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:11:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:11:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:11:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:11:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:11:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:50 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:51 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b48003610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:51 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b7400b340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:51 compute-0 ceph-mon[74335]: pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:11:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:51.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:11:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:51.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:11:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:52 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:53 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:53 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:53.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:53.584Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:11:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:53.584Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:11:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:53.584Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:11:53 compute-0 ceph-mon[74335]: pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:11:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:53.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:54 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:55 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:55 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b3c000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.003000075s ======
Jan 23 10:11:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:55.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000075s
Jan 23 10:11:55 compute-0 ceph-mon[74335]: pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:11:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:11:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:11:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:56 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.719 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.719 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.719 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:11:56 compute-0 ceph-mon[74335]: pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.758 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.759 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.759 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.759 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.759 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.760 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.760 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.760 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.760 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.783 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.783 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.783 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.783 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:11:56 compute-0 nova_compute[249229]: 2026-01-23 10:11:56.784 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:11:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:57 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:11:57.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:11:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:57 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:11:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555214160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:11:57 compute-0 nova_compute[249229]: 2026-01-23 10:11:57.333 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:11:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:11:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:57.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:11:57 compute-0 nova_compute[249229]: 2026-01-23 10:11:57.517 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:11:57 compute-0 nova_compute[249229]: 2026-01-23 10:11:57.518 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4935MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:11:57 compute-0 nova_compute[249229]: 2026-01-23 10:11:57.519 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:11:57 compute-0 nova_compute[249229]: 2026-01-23 10:11:57.519 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:11:57 compute-0 nova_compute[249229]: 2026-01-23 10:11:57.672 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:11:57 compute-0 nova_compute[249229]: 2026-01-23 10:11:57.672 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:11:57 compute-0 nova_compute[249229]: 2026-01-23 10:11:57.782 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:11:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1555214160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:11:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/4280706252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:11:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/354272967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:11:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:11:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:57.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:11:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:11:58 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890008546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:11:58 compute-0 nova_compute[249229]: 2026-01-23 10:11:58.216 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:11:58 compute-0 nova_compute[249229]: 2026-01-23 10:11:58.222 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:11:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:58 compute-0 nova_compute[249229]: 2026-01-23 10:11:58.563 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:11:58 compute-0 nova_compute[249229]: 2026-01-23 10:11:58.566 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:11:58 compute-0 nova_compute[249229]: 2026-01-23 10:11:58.566 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:11:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:58 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b3c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3890008546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:11:58 compute-0 ceph-mon[74335]: pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:11:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3720768701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:11:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:59 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:11:59 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:11:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:59.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:11:59.763 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:11:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:11:59.764 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:11:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:11:59.764 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:11:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:11:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:11:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:59.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:11:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4085484174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:11:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:59] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:11:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:11:59] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:12:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:00 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:00 compute-0 ceph-mon[74335]: pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:01 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b3c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:01 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:01.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:01 compute-0 podman[250542]: 2026-01-23 10:12:01.555237543 +0000 UTC m=+0.084960391 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 23 10:12:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:12:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:01.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:12:01 compute-0 sudo[250571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:12:01 compute-0 sudo[250571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:01 compute-0 sudo[250571]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:12:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:02 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:03 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:03 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b3c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:03.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:03 compute-0 ceph-mon[74335]: pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:12:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:03.584Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:12:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:03.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:04 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:12:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:12:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:05 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:05 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:05.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:05 compute-0 ceph-mon[74335]: pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:12:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:05.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:06 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:07 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:07.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:12:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:07 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 23 10:12:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:07.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 23 10:12:07 compute-0 ceph-mon[74335]: pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:07.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:08 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:09 compute-0 ceph-mon[74335]: pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:09 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:09 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:09.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:09.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:09] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:12:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:09] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:12:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:10 compute-0 podman[250605]: 2026-01-23 10:12:10.526761617 +0000 UTC m=+0.056771248 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 23 10:12:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:10 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:11 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:11 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:11.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:11 compute-0 ceph-mon[74335]: pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:11.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:12:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:12 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:13 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:13 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b3c002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:12:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:13.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:12:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:13.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:12:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:12:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:13.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:12:14 compute-0 ceph-mon[74335]: pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:12:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:14 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b3c002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:15 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:15 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:15 compute-0 ceph-mon[74335]: pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:12:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:15.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:12:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:15.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:16 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b3c002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:17 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b3c002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:17.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:12:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:17 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:12:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:17.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:12:17 compute-0 ceph-mon[74335]: pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:17.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:18 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:19 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b3c002c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:19 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:12:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:19.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:12:19 compute-0 ceph-mon[74335]: pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:19.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:19] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:12:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:19] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:12:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:12:19
Jan 23 10:12:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:12:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:12:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['vms', 'volumes', 'backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.nfs', '.rgw.root']
Jan 23 10:12:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:12:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:12:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:12:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:12:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:20 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b44001f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:21 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:21 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b64004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:12:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:21.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:12:21 compute-0 ceph-mon[74335]: pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:12:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:21.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:12:21 compute-0 sudo[250635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:12:22 compute-0 sudo[250635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:22 compute-0 sudo[250635]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:12:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:22 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[212759]: 23/01/2026 10:12:23 : epoch 69734842 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1b58004760 fd 48 proxy ignored for local
Jan 23 10:12:23 compute-0 kernel: ganesha.nfsd[244469]: segfault at 50 ip 00007f1bf4dc432e sp 00007f1b8cff8210 error 4 in libntirpc.so.5.8[7f1bf4da9000+2c000] likely on CPU 4 (core 0, socket 4)
Jan 23 10:12:23 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 23 10:12:23 compute-0 systemd[1]: Started Process Core Dump (PID 250661/UID 0).
Jan 23 10:12:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 10:12:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:23.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 10:12:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:23.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:12:23 compute-0 ceph-mon[74335]: pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:12:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:23.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:24 compute-0 sudo[250664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:12:24 compute-0 sudo[250664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:24 compute-0 sudo[250664]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:24 compute-0 sudo[250689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:12:24 compute-0 sudo[250689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:24 compute-0 systemd-coredump[250662]: Process 212765 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 67:
                                                    #0  0x00007f1bf4dc432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 23 10:12:24 compute-0 systemd[1]: systemd-coredump@6-250661-0.service: Deactivated successfully.
Jan 23 10:12:24 compute-0 systemd[1]: systemd-coredump@6-250661-0.service: Consumed 1.490s CPU time.
Jan 23 10:12:24 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:12:24 compute-0 podman[250734]: 2026-01-23 10:12:24.825446874 +0000 UTC m=+0.023453809 container died 1c3f32fbcd628d023aea69847b2e3a97561d6f6a8cf586c68cdfd832d662b66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:12:25 compute-0 sudo[250689]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:12:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:12:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:12:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:12:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:12:25 compute-0 ceph-mon[74335]: pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-77922a0e14f419b7a4647e59b5eaf47d7581657ddd7a110d0927d4b235dc5db5-merged.mount: Deactivated successfully.
Jan 23 10:12:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:12:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:25.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:12:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:12:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:12:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:12:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:12:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:12:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:12:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:12:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:12:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:12:25 compute-0 sudo[250765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:12:25 compute-0 sudo[250765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:25 compute-0 sudo[250765]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:25 compute-0 sudo[250790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:12:25 compute-0 sudo[250790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:25.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:25 compute-0 podman[250734]: 2026-01-23 10:12:25.949290827 +0000 UTC m=+1.147297762 container remove 1c3f32fbcd628d023aea69847b2e3a97561d6f6a8cf586c68cdfd832d662b66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:12:25 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
Jan 23 10:12:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:26 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 10:12:26 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 2.237s CPU time.
Jan 23 10:12:26 compute-0 podman[250880]: 2026-01-23 10:12:26.214545966 +0000 UTC m=+0.053966129 container create c4c3233a99bddc708a101f8b7c2face6c42c768457e544e33fb038fba9f587c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chatelet, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:12:26 compute-0 systemd[1]: Started libpod-conmon-c4c3233a99bddc708a101f8b7c2face6c42c768457e544e33fb038fba9f587c0.scope.
Jan 23 10:12:26 compute-0 podman[250880]: 2026-01-23 10:12:26.187245768 +0000 UTC m=+0.026665941 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:12:26 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:12:26 compute-0 podman[250880]: 2026-01-23 10:12:26.338144027 +0000 UTC m=+0.177564200 container init c4c3233a99bddc708a101f8b7c2face6c42c768457e544e33fb038fba9f587c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chatelet, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 10:12:26 compute-0 podman[250880]: 2026-01-23 10:12:26.346619889 +0000 UTC m=+0.186040042 container start c4c3233a99bddc708a101f8b7c2face6c42c768457e544e33fb038fba9f587c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chatelet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 10:12:26 compute-0 podman[250880]: 2026-01-23 10:12:26.350260533 +0000 UTC m=+0.189680706 container attach c4c3233a99bddc708a101f8b7c2face6c42c768457e544e33fb038fba9f587c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 10:12:26 compute-0 magical_chatelet[250897]: 167 167
Jan 23 10:12:26 compute-0 systemd[1]: libpod-c4c3233a99bddc708a101f8b7c2face6c42c768457e544e33fb038fba9f587c0.scope: Deactivated successfully.
Jan 23 10:12:26 compute-0 podman[250880]: 2026-01-23 10:12:26.35262083 +0000 UTC m=+0.192041003 container died c4c3233a99bddc708a101f8b7c2face6c42c768457e544e33fb038fba9f587c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 23 10:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3340769684b6236a4056ff932b34ca61d0051df98ef85f322c9d997bacf07fe-merged.mount: Deactivated successfully.
Jan 23 10:12:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:26 compute-0 podman[250880]: 2026-01-23 10:12:26.408277006 +0000 UTC m=+0.247697159 container remove c4c3233a99bddc708a101f8b7c2face6c42c768457e544e33fb038fba9f587c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 10:12:26 compute-0 systemd[1]: libpod-conmon-c4c3233a99bddc708a101f8b7c2face6c42c768457e544e33fb038fba9f587c0.scope: Deactivated successfully.
Jan 23 10:12:26 compute-0 podman[250923]: 2026-01-23 10:12:26.624340312 +0000 UTC m=+0.082000347 container create d77a23b7c14f3dff47d59dcb73e20af8fa0781c8256c6717a1b07497c696c40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swirles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 23 10:12:26 compute-0 podman[250923]: 2026-01-23 10:12:26.566098803 +0000 UTC m=+0.023758858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:12:26 compute-0 systemd[1]: Started libpod-conmon-d77a23b7c14f3dff47d59dcb73e20af8fa0781c8256c6717a1b07497c696c40a.scope.
Jan 23 10:12:26 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d3eae2118d04f0c07b1e127f0955ef8d613d1d77c723c35f6689b401aad9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d3eae2118d04f0c07b1e127f0955ef8d613d1d77c723c35f6689b401aad9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d3eae2118d04f0c07b1e127f0955ef8d613d1d77c723c35f6689b401aad9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d3eae2118d04f0c07b1e127f0955ef8d613d1d77c723c35f6689b401aad9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d3eae2118d04f0c07b1e127f0955ef8d613d1d77c723c35f6689b401aad9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:26 compute-0 podman[250923]: 2026-01-23 10:12:26.716881629 +0000 UTC m=+0.174541694 container init d77a23b7c14f3dff47d59dcb73e20af8fa0781c8256c6717a1b07497c696c40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:12:26 compute-0 podman[250923]: 2026-01-23 10:12:26.724185028 +0000 UTC m=+0.181845063 container start d77a23b7c14f3dff47d59dcb73e20af8fa0781c8256c6717a1b07497c696c40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swirles, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 10:12:26 compute-0 podman[250923]: 2026-01-23 10:12:26.727203544 +0000 UTC m=+0.184863599 container attach d77a23b7c14f3dff47d59dcb73e20af8fa0781c8256c6717a1b07497c696c40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swirles, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 10:12:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:12:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:12:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:12:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:12:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:12:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:12:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:12:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:27.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:12:27 compute-0 friendly_swirles[250939]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:12:27 compute-0 friendly_swirles[250939]: --> All data devices are unavailable
Jan 23 10:12:27 compute-0 systemd[1]: libpod-d77a23b7c14f3dff47d59dcb73e20af8fa0781c8256c6717a1b07497c696c40a.scope: Deactivated successfully.
Jan 23 10:12:27 compute-0 podman[250923]: 2026-01-23 10:12:27.126310896 +0000 UTC m=+0.583970931 container died d77a23b7c14f3dff47d59dcb73e20af8fa0781c8256c6717a1b07497c696c40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 10:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c85d3eae2118d04f0c07b1e127f0955ef8d613d1d77c723c35f6689b401aad9d-merged.mount: Deactivated successfully.
Jan 23 10:12:27 compute-0 podman[250923]: 2026-01-23 10:12:27.172535713 +0000 UTC m=+0.630195748 container remove d77a23b7c14f3dff47d59dcb73e20af8fa0781c8256c6717a1b07497c696c40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swirles, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:12:27 compute-0 systemd[1]: libpod-conmon-d77a23b7c14f3dff47d59dcb73e20af8fa0781c8256c6717a1b07497c696c40a.scope: Deactivated successfully.
Jan 23 10:12:27 compute-0 sudo[250790]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:27 compute-0 sudo[250965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:12:27 compute-0 sudo[250965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:27 compute-0 sudo[250965]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:27 compute-0 sudo[250990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:12:27 compute-0 sudo[250990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:27.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:27 compute-0 podman[251056]: 2026-01-23 10:12:27.772848699 +0000 UTC m=+0.047116923 container create 2aa88632aec9558b5c245f316b0ff7f80968acd5bc5575021af876946283f3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 10:12:27 compute-0 systemd[1]: Started libpod-conmon-2aa88632aec9558b5c245f316b0ff7f80968acd5bc5575021af876946283f3a7.scope.
Jan 23 10:12:27 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:12:27 compute-0 podman[251056]: 2026-01-23 10:12:27.747696192 +0000 UTC m=+0.021964436 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:12:27 compute-0 podman[251056]: 2026-01-23 10:12:27.849936676 +0000 UTC m=+0.124204920 container init 2aa88632aec9558b5c245f316b0ff7f80968acd5bc5575021af876946283f3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:12:27 compute-0 podman[251056]: 2026-01-23 10:12:27.856240805 +0000 UTC m=+0.130509039 container start 2aa88632aec9558b5c245f316b0ff7f80968acd5bc5575021af876946283f3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:12:27 compute-0 laughing_haslett[251073]: 167 167
Jan 23 10:12:27 compute-0 systemd[1]: libpod-2aa88632aec9558b5c245f316b0ff7f80968acd5bc5575021af876946283f3a7.scope: Deactivated successfully.
Jan 23 10:12:27 compute-0 podman[251056]: 2026-01-23 10:12:27.862428212 +0000 UTC m=+0.136696456 container attach 2aa88632aec9558b5c245f316b0ff7f80968acd5bc5575021af876946283f3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 10:12:27 compute-0 ceph-mon[74335]: pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:27 compute-0 podman[251056]: 2026-01-23 10:12:27.8627113 +0000 UTC m=+0.136979524 container died 2aa88632aec9558b5c245f316b0ff7f80968acd5bc5575021af876946283f3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8e192f95da99b60d6b72c1a2b8d1d21c09edde192c4904ea37e4a8b9ce9d16a-merged.mount: Deactivated successfully.
Jan 23 10:12:27 compute-0 podman[251056]: 2026-01-23 10:12:27.901598738 +0000 UTC m=+0.175866962 container remove 2aa88632aec9558b5c245f316b0ff7f80968acd5bc5575021af876946283f3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 10:12:27 compute-0 systemd[1]: libpod-conmon-2aa88632aec9558b5c245f316b0ff7f80968acd5bc5575021af876946283f3a7.scope: Deactivated successfully.
Jan 23 10:12:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:27.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101228 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:12:28 compute-0 podman[251096]: 2026-01-23 10:12:28.096289076 +0000 UTC m=+0.047143675 container create c63af0c7c1f1ede11735db40f758453b43fdcca8ebf403b2f24a873b396bf76a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:12:28 compute-0 systemd[1]: Started libpod-conmon-c63af0c7c1f1ede11735db40f758453b43fdcca8ebf403b2f24a873b396bf76a.scope.
Jan 23 10:12:28 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64cf335908b1b51086ac31d54ba95cec12efc0d5d049a6765e227a1a4afbf65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64cf335908b1b51086ac31d54ba95cec12efc0d5d049a6765e227a1a4afbf65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64cf335908b1b51086ac31d54ba95cec12efc0d5d049a6765e227a1a4afbf65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64cf335908b1b51086ac31d54ba95cec12efc0d5d049a6765e227a1a4afbf65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:28 compute-0 podman[251096]: 2026-01-23 10:12:28.076461951 +0000 UTC m=+0.027316560 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:12:28 compute-0 podman[251096]: 2026-01-23 10:12:28.191731745 +0000 UTC m=+0.142586354 container init c63af0c7c1f1ede11735db40f758453b43fdcca8ebf403b2f24a873b396bf76a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:12:28 compute-0 podman[251096]: 2026-01-23 10:12:28.199012913 +0000 UTC m=+0.149867492 container start c63af0c7c1f1ede11735db40f758453b43fdcca8ebf403b2f24a873b396bf76a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pike, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 10:12:28 compute-0 podman[251096]: 2026-01-23 10:12:28.202096311 +0000 UTC m=+0.152950890 container attach c63af0c7c1f1ede11735db40f758453b43fdcca8ebf403b2f24a873b396bf76a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 10:12:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:28 compute-0 focused_pike[251112]: {
Jan 23 10:12:28 compute-0 focused_pike[251112]:     "1": [
Jan 23 10:12:28 compute-0 focused_pike[251112]:         {
Jan 23 10:12:28 compute-0 focused_pike[251112]:             "devices": [
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "/dev/loop3"
Jan 23 10:12:28 compute-0 focused_pike[251112]:             ],
Jan 23 10:12:28 compute-0 focused_pike[251112]:             "lv_name": "ceph_lv0",
Jan 23 10:12:28 compute-0 focused_pike[251112]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:12:28 compute-0 focused_pike[251112]:             "lv_size": "21470642176",
Jan 23 10:12:28 compute-0 focused_pike[251112]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:12:28 compute-0 focused_pike[251112]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:12:28 compute-0 focused_pike[251112]:             "name": "ceph_lv0",
Jan 23 10:12:28 compute-0 focused_pike[251112]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:12:28 compute-0 focused_pike[251112]:             "tags": {
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.cluster_name": "ceph",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.crush_device_class": "",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.encrypted": "0",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.osd_id": "1",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.type": "block",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.vdo": "0",
Jan 23 10:12:28 compute-0 focused_pike[251112]:                 "ceph.with_tpm": "0"
Jan 23 10:12:28 compute-0 focused_pike[251112]:             },
Jan 23 10:12:28 compute-0 focused_pike[251112]:             "type": "block",
Jan 23 10:12:28 compute-0 focused_pike[251112]:             "vg_name": "ceph_vg0"
Jan 23 10:12:28 compute-0 focused_pike[251112]:         }
Jan 23 10:12:28 compute-0 focused_pike[251112]:     ]
Jan 23 10:12:28 compute-0 focused_pike[251112]: }
Jan 23 10:12:28 compute-0 systemd[1]: libpod-c63af0c7c1f1ede11735db40f758453b43fdcca8ebf403b2f24a873b396bf76a.scope: Deactivated successfully.
Jan 23 10:12:28 compute-0 podman[251096]: 2026-01-23 10:12:28.501230874 +0000 UTC m=+0.452085513 container died c63af0c7c1f1ede11735db40f758453b43fdcca8ebf403b2f24a873b396bf76a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f64cf335908b1b51086ac31d54ba95cec12efc0d5d049a6765e227a1a4afbf65-merged.mount: Deactivated successfully.
Jan 23 10:12:28 compute-0 podman[251096]: 2026-01-23 10:12:28.546048671 +0000 UTC m=+0.496903250 container remove c63af0c7c1f1ede11735db40f758453b43fdcca8ebf403b2f24a873b396bf76a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pike, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:12:28 compute-0 systemd[1]: libpod-conmon-c63af0c7c1f1ede11735db40f758453b43fdcca8ebf403b2f24a873b396bf76a.scope: Deactivated successfully.
Jan 23 10:12:28 compute-0 sudo[250990]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:28 compute-0 sudo[251134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:12:28 compute-0 sudo[251134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:28 compute-0 sudo[251134]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:28 compute-0 sudo[251159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:12:28 compute-0 sudo[251159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101229 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:12:29 compute-0 ceph-mon[74335]: pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:29 compute-0 podman[251224]: 2026-01-23 10:12:29.121770186 +0000 UTC m=+0.020457474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:12:29 compute-0 podman[251224]: 2026-01-23 10:12:29.399080238 +0000 UTC m=+0.297767516 container create 2ce50695f52e2090eea105f0ba5145d4f44823cd315c6db0daf8e3094abcb3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 23 10:12:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:12:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:29.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:12:29 compute-0 systemd[1]: Started libpod-conmon-2ce50695f52e2090eea105f0ba5145d4f44823cd315c6db0daf8e3094abcb3f0.scope.
Jan 23 10:12:29 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:12:29 compute-0 podman[251224]: 2026-01-23 10:12:29.850572113 +0000 UTC m=+0.749259391 container init 2ce50695f52e2090eea105f0ba5145d4f44823cd315c6db0daf8e3094abcb3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:12:29 compute-0 podman[251224]: 2026-01-23 10:12:29.857453789 +0000 UTC m=+0.756141047 container start 2ce50695f52e2090eea105f0ba5145d4f44823cd315c6db0daf8e3094abcb3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:12:29 compute-0 goofy_nightingale[251242]: 167 167
Jan 23 10:12:29 compute-0 systemd[1]: libpod-2ce50695f52e2090eea105f0ba5145d4f44823cd315c6db0daf8e3094abcb3f0.scope: Deactivated successfully.
Jan 23 10:12:29 compute-0 podman[251224]: 2026-01-23 10:12:29.862438661 +0000 UTC m=+0.761125959 container attach 2ce50695f52e2090eea105f0ba5145d4f44823cd315c6db0daf8e3094abcb3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Jan 23 10:12:29 compute-0 podman[251224]: 2026-01-23 10:12:29.86275713 +0000 UTC m=+0.761444388 container died 2ce50695f52e2090eea105f0ba5145d4f44823cd315c6db0daf8e3094abcb3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_nightingale, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Jan 23 10:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-015fb24ef828542eca8ffc37c46c5b3b6add72a7c3346a143eb518327d5f73f1-merged.mount: Deactivated successfully.
Jan 23 10:12:29 compute-0 podman[251224]: 2026-01-23 10:12:29.901305239 +0000 UTC m=+0.799992507 container remove 2ce50695f52e2090eea105f0ba5145d4f44823cd315c6db0daf8e3094abcb3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_nightingale, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:12:29 compute-0 systemd[1]: libpod-conmon-2ce50695f52e2090eea105f0ba5145d4f44823cd315c6db0daf8e3094abcb3f0.scope: Deactivated successfully.
Jan 23 10:12:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:12:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:29.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:12:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:29] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:12:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:29] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:12:30 compute-0 podman[251264]: 2026-01-23 10:12:30.068867173 +0000 UTC m=+0.045452226 container create 99184b60bf8622ecc9372a4fa4312218921ddfeadbd2d1ca9cd8ba785dd58e1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 23 10:12:30 compute-0 systemd[1]: Started libpod-conmon-99184b60bf8622ecc9372a4fa4312218921ddfeadbd2d1ca9cd8ba785dd58e1e.scope.
Jan 23 10:12:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b084f1982967d675b064f3ef5cea44d566af7905491d5d1b02d706b3bacaa16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b084f1982967d675b064f3ef5cea44d566af7905491d5d1b02d706b3bacaa16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b084f1982967d675b064f3ef5cea44d566af7905491d5d1b02d706b3bacaa16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b084f1982967d675b064f3ef5cea44d566af7905491d5d1b02d706b3bacaa16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:30 compute-0 podman[251264]: 2026-01-23 10:12:30.049692117 +0000 UTC m=+0.026277200 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:12:30 compute-0 podman[251264]: 2026-01-23 10:12:30.151869819 +0000 UTC m=+0.128454892 container init 99184b60bf8622ecc9372a4fa4312218921ddfeadbd2d1ca9cd8ba785dd58e1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_dewdney, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:12:30 compute-0 podman[251264]: 2026-01-23 10:12:30.159832826 +0000 UTC m=+0.136417879 container start 99184b60bf8622ecc9372a4fa4312218921ddfeadbd2d1ca9cd8ba785dd58e1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_dewdney, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 10:12:30 compute-0 podman[251264]: 2026-01-23 10:12:30.16419177 +0000 UTC m=+0.140776853 container attach 99184b60bf8622ecc9372a4fa4312218921ddfeadbd2d1ca9cd8ba785dd58e1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_dewdney, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 10:12:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:30 compute-0 lvm[251356]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:12:30 compute-0 lvm[251356]: VG ceph_vg0 finished
Jan 23 10:12:30 compute-0 confident_dewdney[251281]: {}
Jan 23 10:12:30 compute-0 systemd[1]: libpod-99184b60bf8622ecc9372a4fa4312218921ddfeadbd2d1ca9cd8ba785dd58e1e.scope: Deactivated successfully.
Jan 23 10:12:30 compute-0 systemd[1]: libpod-99184b60bf8622ecc9372a4fa4312218921ddfeadbd2d1ca9cd8ba785dd58e1e.scope: Consumed 1.113s CPU time.
Jan 23 10:12:30 compute-0 podman[251360]: 2026-01-23 10:12:30.940173381 +0000 UTC m=+0.025437126 container died 99184b60bf8622ecc9372a4fa4312218921ddfeadbd2d1ca9cd8ba785dd58e1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_dewdney, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b084f1982967d675b064f3ef5cea44d566af7905491d5d1b02d706b3bacaa16-merged.mount: Deactivated successfully.
Jan 23 10:12:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:30 compute-0 podman[251360]: 2026-01-23 10:12:30.983233788 +0000 UTC m=+0.068497503 container remove 99184b60bf8622ecc9372a4fa4312218921ddfeadbd2d1ca9cd8ba785dd58e1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Jan 23 10:12:30 compute-0 systemd[1]: libpod-conmon-99184b60bf8622ecc9372a4fa4312218921ddfeadbd2d1ca9cd8ba785dd58e1e.scope: Deactivated successfully.
Jan 23 10:12:31 compute-0 sudo[251159]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:12:31 compute-0 ceph-mon[74335]: pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:12:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:31 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:12:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:12:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:31.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:12:32 compute-0 sudo[251376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:12:32 compute-0 sudo[251376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:32 compute-0 sudo[251376]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:12:32 compute-0 podman[251400]: 2026-01-23 10:12:32.505284368 +0000 UTC m=+0.136858511 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 23 10:12:32 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:12:32 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:12:32 compute-0 ceph-mon[74335]: pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:12:33 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.24496 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 10:12:33 compute-0 ceph-mgr[74633]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 10:12:33 compute-0 ceph-mgr[74633]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 10:12:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 23 10:12:33 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987316555' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 23 10:12:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:33.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:33 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14898 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 10:12:33 compute-0 ceph-mgr[74633]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 10:12:33 compute-0 ceph-mgr[74633]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 10:12:33 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14898 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 23 10:12:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:33.588Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:12:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:33.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:12:34 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3457274779' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 23 10:12:34 compute-0 ceph-mon[74335]: from='client.24496 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 10:12:34 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1987316555' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 23 10:12:34 compute-0 ceph-mon[74335]: from='client.14898 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 10:12:34 compute-0 ceph-mon[74335]: from='client.14898 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 23 10:12:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:12:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:12:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:35.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:35 compute-0 ceph-mon[74335]: pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 23 10:12:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:12:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:35.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:36 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 7.
Jan 23 10:12:36 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:12:36 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 2.237s CPU time.
Jan 23 10:12:36 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 10:12:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 85 B/s wr, 114 op/s
Jan 23 10:12:36 compute-0 podman[251480]: 2026-01-23 10:12:36.387139801 +0000 UTC m=+0.025003654 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:12:36 compute-0 podman[251480]: 2026-01-23 10:12:36.552455911 +0000 UTC m=+0.190319734 container create 0fddaa8774d77ce08f23ec7c205e86e5445782b9a1aaa38f07872d28a02d4d5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:12:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:37.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/477d1c01d34cd065bda3a294d6711bbddf8bdb37fff4b81be92f1bc09e11c35d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/477d1c01d34cd065bda3a294d6711bbddf8bdb37fff4b81be92f1bc09e11c35d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/477d1c01d34cd065bda3a294d6711bbddf8bdb37fff4b81be92f1bc09e11c35d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/477d1c01d34cd065bda3a294d6711bbddf8bdb37fff4b81be92f1bc09e11c35d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:12:37 compute-0 podman[251480]: 2026-01-23 10:12:37.363146972 +0000 UTC m=+1.001010845 container init 0fddaa8774d77ce08f23ec7c205e86e5445782b9a1aaa38f07872d28a02d4d5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 10:12:37 compute-0 podman[251480]: 2026-01-23 10:12:37.368569257 +0000 UTC m=+1.006433080 container start 0fddaa8774d77ce08f23ec7c205e86e5445782b9a1aaa38f07872d28a02d4d5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:12:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 10:12:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 10:12:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:37.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:37 compute-0 ceph-mon[74335]: pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 85 B/s wr, 114 op/s
Jan 23 10:12:37 compute-0 bash[251480]: 0fddaa8774d77ce08f23ec7c205e86e5445782b9a1aaa38f07872d28a02d4d5a
Jan 23 10:12:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 10:12:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 10:12:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 10:12:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 10:12:37 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:12:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 10:12:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:12:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:37.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 85 B/s wr, 114 op/s
Jan 23 10:12:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:12:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:39.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:12:39 compute-0 ceph-mon[74335]: pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 85 B/s wr, 114 op/s
Jan 23 10:12:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:12:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:39.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:12:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:39] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:12:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:39] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:12:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 85 B/s wr, 114 op/s
Jan 23 10:12:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:12:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:41.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:12:41 compute-0 podman[251542]: 2026-01-23 10:12:41.549733577 +0000 UTC m=+0.078477457 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:12:41 compute-0 ceph-mon[74335]: pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 85 B/s wr, 114 op/s
Jan 23 10:12:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:12:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:41.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:12:42 compute-0 sudo[251563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:12:42 compute-0 sudo[251563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:12:42 compute-0 sudo[251563]: pam_unix(sudo:session): session closed for user root
Jan 23 10:12:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 938 B/s wr, 134 op/s
Jan 23 10:12:42 compute-0 ceph-mon[74335]: pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 938 B/s wr, 134 op/s
Jan 23 10:12:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:43.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:43.589Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:12:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:43 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 23 10:12:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:43 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 23 10:12:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:43 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:12:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:43 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:12:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:43 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 23 10:12:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:43.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 938 B/s wr, 134 op/s
Jan 23 10:12:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:12:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:45.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:12:45 compute-0 ceph-mon[74335]: pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 938 B/s wr, 134 op/s
Jan 23 10:12:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:45.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:46 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:12:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:46 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:12:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:46 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:12:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:46 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 23 10:12:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:46 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:12:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:46 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:12:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:46 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:12:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 1023 B/s wr, 182 op/s
Jan 23 10:12:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:47.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:12:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:47.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:47 compute-0 ceph-mon[74335]: pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 1023 B/s wr, 182 op/s
Jan 23 10:12:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:12:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:47.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:12:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 938 B/s wr, 67 op/s
Jan 23 10:12:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:12:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3459287412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:12:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:12:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3459287412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:12:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:49.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:49 compute-0 ceph-mon[74335]: pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 938 B/s wr, 67 op/s
Jan 23 10:12:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3459287412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:12:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3459287412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:12:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:49] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:12:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:49] "GET /metrics HTTP/1.1" 200 48426 "" "Prometheus/2.51.0"
Jan 23 10:12:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:49.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:12:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101250 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:12:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:12:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:12:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:12:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:12:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:12:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:12:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 938 B/s wr, 67 op/s
Jan 23 10:12:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000001b:nfs.cephfs.2: -2
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 10:12:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:12:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:51 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:51 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e74001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:51.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 23 10:12:51 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/272537903' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 23 10:12:51 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.14940 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 10:12:51 compute-0 ceph-mgr[74633]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 10:12:51 compute-0 ceph-mgr[74633]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 10:12:51 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.24536 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 10:12:51 compute-0 ceph-mgr[74633]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 10:12:51 compute-0 ceph-mgr[74633]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 23 10:12:51 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.24536 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 23 10:12:51 compute-0 ceph-mon[74335]: pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 938 B/s wr, 67 op/s
Jan 23 10:12:51 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/272537903' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 23 10:12:51 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3119665311' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 23 10:12:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:51.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.7 KiB/s wr, 70 op/s
Jan 23 10:12:52 compute-0 ceph-mon[74335]: from='client.14940 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 10:12:52 compute-0 ceph-mon[74335]: from='client.24536 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 23 10:12:52 compute-0 ceph-mon[74335]: from='client.24536 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 23 10:12:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:52 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e74001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101253 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:12:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:53 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:53 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:12:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:53.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:12:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:53.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:12:53 compute-0 ceph-mon[74335]: pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.7 KiB/s wr, 70 op/s
Jan 23 10:12:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:53.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 852 B/s wr, 50 op/s
Jan 23 10:12:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:54 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:55 compute-0 ceph-mon[74335]: pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 852 B/s wr, 50 op/s
Jan 23 10:12:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:55 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:55 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:12:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:55.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:12:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:12:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:55.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:12:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:12:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 852 B/s wr, 50 op/s
Jan 23 10:12:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:56 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:12:57.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:12:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:57 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:57 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:57.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:57 compute-0 ceph-mon[74335]: pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 852 B/s wr, 50 op/s
Jan 23 10:12:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:12:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:57.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:12:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.556 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.556 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.577 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.578 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.578 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.600 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.600 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.601 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.601 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.601 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.601 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.601 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.620 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.621 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.621 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.621 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:12:58 compute-0 nova_compute[249229]: 2026-01-23 10:12:58.622 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:12:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:58 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:59 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:12:59 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:12:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:12:59 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/811842954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.168 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.305 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.306 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4901MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.306 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.306 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.365 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.366 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.383 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:12:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:12:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:59.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:12:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:12:59.765 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:12:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:12:59.766 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:12:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:12:59.766 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:12:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:12:59 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2632395150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.851 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.856 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.889 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.890 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:12:59 compute-0 nova_compute[249229]: 2026-01-23 10:12:59.890 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:12:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:59] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Jan 23 10:12:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:12:59] "GET /metrics HTTP/1.1" 200 48431 "" "Prometheus/2.51.0"
Jan 23 10:12:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:12:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:12:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:59.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:12:59 compute-0 ceph-mon[74335]: pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Jan 23 10:12:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/811842954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:12:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/190810241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:13:00 compute-0 nova_compute[249229]: 2026-01-23 10:13:00.005 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:13:00 compute-0 nova_compute[249229]: 2026-01-23 10:13:00.006 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:13:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Jan 23 10:13:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:00 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2632395150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:13:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/346920199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:13:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/507459087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:13:01 compute-0 ceph-mon[74335]: pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Jan 23 10:13:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:01 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:01 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:13:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:01.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:13:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:13:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:01.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:13:02 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/814723742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:13:02 compute-0 sudo[251667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:13:02 compute-0 sudo[251667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:02 compute-0 sudo[251667]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 23 10:13:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:02 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:03 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:03 compute-0 ceph-mon[74335]: pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 23 10:13:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:03 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:03.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:03 compute-0 podman[251693]: 2026-01-23 10:13:03.583813792 +0000 UTC m=+0.108430481 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 23 10:13:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:03.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:13:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:03.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:04 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:13:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:13:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:05 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:05 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:05.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:05 compute-0 ceph-mon[74335]: pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:13:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:05.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:06 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:07.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:13:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:07.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:13:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:07.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:13:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:07 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:07 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:07 compute-0 ceph-mon[74335]: pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:07.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:08 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:09 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:09 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:09.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:09 compute-0 ceph-mon[74335]: pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:09] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:13:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:09] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:13:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:13:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:09.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:13:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:10 compute-0 ceph-mon[74335]: pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:10 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:11 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:11 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:11.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:11.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:13:12 compute-0 podman[251729]: 2026-01-23 10:13:12.521230801 +0000 UTC m=+0.054287658 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 23 10:13:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:12 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:13 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:13 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:13:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:13.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:13:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:13.593Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:13:13 compute-0 ceph-mon[74335]: pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:13:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:14.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:14 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:14 compute-0 ceph-mon[74335]: pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:15 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:15 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:13:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:15.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:13:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:16.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:16 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:17.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:13:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:17 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:17 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:17 compute-0 ceph-mon[74335]: pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:17.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:13:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:18.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:13:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101318 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:13:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:18 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:19 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:19 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:19 compute-0 ceph-mon[74335]: pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:19.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:19] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:13:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:19] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:13:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:13:19
Jan 23 10:13:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:13:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:13:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['volumes', 'images', '.mgr', 'cephfs.cephfs.meta', '.nfs', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms']
Jan 23 10:13:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:13:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:13:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:20.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:13:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:13:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:13:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:13:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:20 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:21 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:21 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:21.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:21 compute-0 ceph-mon[74335]: pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:22.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:22 compute-0 sudo[251757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:13:22 compute-0 sudo[251757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:22 compute-0 sudo[251757]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:13:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:22 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:23 compute-0 ceph-mon[74335]: pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:13:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:23 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:23 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:23.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:23.594Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:13:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:24.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:13:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:24 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:25 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:25 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:25.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:25 compute-0 ceph-mon[74335]: pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:13:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:26.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:13:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:26 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:27 compute-0 ceph-mon[74335]: pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:13:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:27.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:13:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:27 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:27 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:27.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:27 : epoch 69734995 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:13:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:28.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:13:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:28 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:29 compute-0 ceph-mon[74335]: pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:13:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:29 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:29 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:29.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:29] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:13:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:29] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:13:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:30.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:13:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:30 : epoch 69734995 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:13:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:30 : epoch 69734995 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:13:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:30 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:31 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:31 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:31.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:31 compute-0 ceph-mon[74335]: pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:13:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:32.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Jan 23 10:13:32 compute-0 sudo[251795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:13:32 compute-0 sudo[251795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:32 compute-0 sudo[251795]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:32 compute-0 sudo[251820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:13:32 compute-0 sudo[251820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:32 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:33 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:33 compute-0 ceph-mon[74335]: pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Jan 23 10:13:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:33 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:33 compute-0 sudo[251820]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:13:33 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:13:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:13:33 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:13:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:13:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:33.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:13:33 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:13:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:13:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:34.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:34 : epoch 69734995 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:13:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:13:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:13:34 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:13:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:13:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:13:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:13:34 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:13:34 compute-0 sudo[251877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:13:34 compute-0 sudo[251877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:34 compute-0 sudo[251877]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:34 compute-0 sudo[251908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:13:34 compute-0 sudo[251908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 2 op/s
Jan 23 10:13:34 compute-0 podman[251901]: 2026-01-23 10:13:34.472171822 +0000 UTC m=+0.122304420 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 23 10:13:34 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:13:34 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:13:34 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:13:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:34 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:34 compute-0 podman[251994]: 2026-01-23 10:13:34.814008105 +0000 UTC m=+0.046407396 container create 6e43f6002c7ea0a946ad7ae3d68f16c1b4bdcffda059082bc0978ed74ffac8b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:13:34 compute-0 systemd[1]: Started libpod-conmon-6e43f6002c7ea0a946ad7ae3d68f16c1b4bdcffda059082bc0978ed74ffac8b7.scope.
Jan 23 10:13:34 compute-0 podman[251994]: 2026-01-23 10:13:34.792455099 +0000 UTC m=+0.024854320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:13:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:13:34 compute-0 podman[251994]: 2026-01-23 10:13:34.935526192 +0000 UTC m=+0.167925423 container init 6e43f6002c7ea0a946ad7ae3d68f16c1b4bdcffda059082bc0978ed74ffac8b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_black, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 10:13:34 compute-0 podman[251994]: 2026-01-23 10:13:34.945018469 +0000 UTC m=+0.177417670 container start 6e43f6002c7ea0a946ad7ae3d68f16c1b4bdcffda059082bc0978ed74ffac8b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:13:34 compute-0 priceless_black[252010]: 167 167
Jan 23 10:13:34 compute-0 systemd[1]: libpod-6e43f6002c7ea0a946ad7ae3d68f16c1b4bdcffda059082bc0978ed74ffac8b7.scope: Deactivated successfully.
Jan 23 10:13:34 compute-0 podman[251994]: 2026-01-23 10:13:34.952066397 +0000 UTC m=+0.184465598 container attach 6e43f6002c7ea0a946ad7ae3d68f16c1b4bdcffda059082bc0978ed74ffac8b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:13:34 compute-0 podman[251994]: 2026-01-23 10:13:34.953833037 +0000 UTC m=+0.186232238 container died 6e43f6002c7ea0a946ad7ae3d68f16c1b4bdcffda059082bc0978ed74ffac8b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_black, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a88c1fb6713ea6fdba81d613a75d09407d4996423410c027f035105a5380b13-merged.mount: Deactivated successfully.
Jan 23 10:13:35 compute-0 podman[251994]: 2026-01-23 10:13:35.003143274 +0000 UTC m=+0.235542475 container remove 6e43f6002c7ea0a946ad7ae3d68f16c1b4bdcffda059082bc0978ed74ffac8b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 10:13:35 compute-0 systemd[1]: libpod-conmon-6e43f6002c7ea0a946ad7ae3d68f16c1b4bdcffda059082bc0978ed74ffac8b7.scope: Deactivated successfully.
Jan 23 10:13:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:13:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:13:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:35 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:35 compute-0 podman[252033]: 2026-01-23 10:13:35.167853465 +0000 UTC m=+0.047284300 container create b352852ce6f9905a8d803a10be1eb16f94e5ad5d0d36b0f6ffe44c2b10acf400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lovelace, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:13:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:35 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:35 compute-0 systemd[1]: Started libpod-conmon-b352852ce6f9905a8d803a10be1eb16f94e5ad5d0d36b0f6ffe44c2b10acf400.scope.
Jan 23 10:13:35 compute-0 podman[252033]: 2026-01-23 10:13:35.146488955 +0000 UTC m=+0.025919810 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:13:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc4976640ed5ab47a21e59e6d828a3213f65a2f5bbf9612e7815b6a2a726a84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc4976640ed5ab47a21e59e6d828a3213f65a2f5bbf9612e7815b6a2a726a84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc4976640ed5ab47a21e59e6d828a3213f65a2f5bbf9612e7815b6a2a726a84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc4976640ed5ab47a21e59e6d828a3213f65a2f5bbf9612e7815b6a2a726a84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc4976640ed5ab47a21e59e6d828a3213f65a2f5bbf9612e7815b6a2a726a84/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:35 compute-0 podman[252033]: 2026-01-23 10:13:35.275565274 +0000 UTC m=+0.154996129 container init b352852ce6f9905a8d803a10be1eb16f94e5ad5d0d36b0f6ffe44c2b10acf400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lovelace, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 23 10:13:35 compute-0 podman[252033]: 2026-01-23 10:13:35.284438764 +0000 UTC m=+0.163869589 container start b352852ce6f9905a8d803a10be1eb16f94e5ad5d0d36b0f6ffe44c2b10acf400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:13:35 compute-0 podman[252033]: 2026-01-23 10:13:35.287450078 +0000 UTC m=+0.166881273 container attach b352852ce6f9905a8d803a10be1eb16f94e5ad5d0d36b0f6ffe44c2b10acf400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:13:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:35.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:35 compute-0 thirsty_lovelace[252049]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:13:35 compute-0 thirsty_lovelace[252049]: --> All data devices are unavailable
Jan 23 10:13:35 compute-0 systemd[1]: libpod-b352852ce6f9905a8d803a10be1eb16f94e5ad5d0d36b0f6ffe44c2b10acf400.scope: Deactivated successfully.
Jan 23 10:13:35 compute-0 podman[252033]: 2026-01-23 10:13:35.641126014 +0000 UTC m=+0.520556829 container died b352852ce6f9905a8d803a10be1eb16f94e5ad5d0d36b0f6ffe44c2b10acf400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dc4976640ed5ab47a21e59e6d828a3213f65a2f5bbf9612e7815b6a2a726a84-merged.mount: Deactivated successfully.
Jan 23 10:13:35 compute-0 podman[252033]: 2026-01-23 10:13:35.686249023 +0000 UTC m=+0.565679848 container remove b352852ce6f9905a8d803a10be1eb16f94e5ad5d0d36b0f6ffe44c2b10acf400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:13:35 compute-0 systemd[1]: libpod-conmon-b352852ce6f9905a8d803a10be1eb16f94e5ad5d0d36b0f6ffe44c2b10acf400.scope: Deactivated successfully.
Jan 23 10:13:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:13:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:13:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:13:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:13:35 compute-0 ceph-mon[74335]: pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 2 op/s
Jan 23 10:13:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:13:35 compute-0 sudo[251908]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:35 compute-0 sudo[252076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:13:35 compute-0 sudo[252076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:35 compute-0 sudo[252076]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:35 compute-0 sudo[252102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:13:35 compute-0 sudo[252102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:36.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:36 compute-0 podman[252168]: 2026-01-23 10:13:36.240022866 +0000 UTC m=+0.044464972 container create e9ff64cc77cc382c32bf64f2ac2116f552247d3ed0fae5214ab12fcd6a08cc43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 10:13:36 compute-0 systemd[1]: Started libpod-conmon-e9ff64cc77cc382c32bf64f2ac2116f552247d3ed0fae5214ab12fcd6a08cc43.scope.
Jan 23 10:13:36 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:13:36 compute-0 podman[252168]: 2026-01-23 10:13:36.314559891 +0000 UTC m=+0.119001997 container init e9ff64cc77cc382c32bf64f2ac2116f552247d3ed0fae5214ab12fcd6a08cc43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 23 10:13:36 compute-0 podman[252168]: 2026-01-23 10:13:36.225102816 +0000 UTC m=+0.029544932 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:13:36 compute-0 podman[252168]: 2026-01-23 10:13:36.322121894 +0000 UTC m=+0.126563990 container start e9ff64cc77cc382c32bf64f2ac2116f552247d3ed0fae5214ab12fcd6a08cc43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:13:36 compute-0 podman[252168]: 2026-01-23 10:13:36.325089098 +0000 UTC m=+0.129531214 container attach e9ff64cc77cc382c32bf64f2ac2116f552247d3ed0fae5214ab12fcd6a08cc43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 10:13:36 compute-0 clever_benz[252184]: 167 167
Jan 23 10:13:36 compute-0 systemd[1]: libpod-e9ff64cc77cc382c32bf64f2ac2116f552247d3ed0fae5214ab12fcd6a08cc43.scope: Deactivated successfully.
Jan 23 10:13:36 compute-0 podman[252168]: 2026-01-23 10:13:36.326850617 +0000 UTC m=+0.131292743 container died e9ff64cc77cc382c32bf64f2ac2116f552247d3ed0fae5214ab12fcd6a08cc43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 23 10:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ad4cab863522682eadfe4089a6c5028d4d783df486c1dbe9bed935c58c29391-merged.mount: Deactivated successfully.
Jan 23 10:13:36 compute-0 podman[252168]: 2026-01-23 10:13:36.360703079 +0000 UTC m=+0.165145175 container remove e9ff64cc77cc382c32bf64f2ac2116f552247d3ed0fae5214ab12fcd6a08cc43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:13:36 compute-0 systemd[1]: libpod-conmon-e9ff64cc77cc382c32bf64f2ac2116f552247d3ed0fae5214ab12fcd6a08cc43.scope: Deactivated successfully.
Jan 23 10:13:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:13:36 compute-0 podman[252208]: 2026-01-23 10:13:36.528818607 +0000 UTC m=+0.041307863 container create aae61fab12d32e1ff5ae8695ffeac4b47168b85e1900159e6e2ba57058c6868d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:13:36 compute-0 systemd[1]: Started libpod-conmon-aae61fab12d32e1ff5ae8695ffeac4b47168b85e1900159e6e2ba57058c6868d.scope.
Jan 23 10:13:36 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382b1d4bb266e1db6f43b6588360362c4e9d66f7b142e1ae0df1abee211a1af3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382b1d4bb266e1db6f43b6588360362c4e9d66f7b142e1ae0df1abee211a1af3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382b1d4bb266e1db6f43b6588360362c4e9d66f7b142e1ae0df1abee211a1af3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382b1d4bb266e1db6f43b6588360362c4e9d66f7b142e1ae0df1abee211a1af3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:36 compute-0 podman[252208]: 2026-01-23 10:13:36.511649764 +0000 UTC m=+0.024139040 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:13:36 compute-0 podman[252208]: 2026-01-23 10:13:36.61856756 +0000 UTC m=+0.131056836 container init aae61fab12d32e1ff5ae8695ffeac4b47168b85e1900159e6e2ba57058c6868d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:13:36 compute-0 podman[252208]: 2026-01-23 10:13:36.626254997 +0000 UTC m=+0.138744253 container start aae61fab12d32e1ff5ae8695ffeac4b47168b85e1900159e6e2ba57058c6868d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:13:36 compute-0 podman[252208]: 2026-01-23 10:13:36.631490974 +0000 UTC m=+0.143980250 container attach aae61fab12d32e1ff5ae8695ffeac4b47168b85e1900159e6e2ba57058c6868d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:13:36 compute-0 ceph-mon[74335]: pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:13:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:36 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]: {
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:     "1": [
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:         {
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             "devices": [
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "/dev/loop3"
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             ],
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             "lv_name": "ceph_lv0",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             "lv_size": "21470642176",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             "name": "ceph_lv0",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             "tags": {
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.cluster_name": "ceph",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.crush_device_class": "",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.encrypted": "0",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.osd_id": "1",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.type": "block",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.vdo": "0",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:                 "ceph.with_tpm": "0"
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             },
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             "type": "block",
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:             "vg_name": "ceph_vg0"
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:         }
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]:     ]
Jan 23 10:13:36 compute-0 vigorous_kirch[252224]: }
Jan 23 10:13:36 compute-0 systemd[1]: libpod-aae61fab12d32e1ff5ae8695ffeac4b47168b85e1900159e6e2ba57058c6868d.scope: Deactivated successfully.
Jan 23 10:13:36 compute-0 podman[252208]: 2026-01-23 10:13:36.935099451 +0000 UTC m=+0.447588737 container died aae61fab12d32e1ff5ae8695ffeac4b47168b85e1900159e6e2ba57058c6868d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-382b1d4bb266e1db6f43b6588360362c4e9d66f7b142e1ae0df1abee211a1af3-merged.mount: Deactivated successfully.
Jan 23 10:13:36 compute-0 podman[252208]: 2026-01-23 10:13:36.985677673 +0000 UTC m=+0.498166929 container remove aae61fab12d32e1ff5ae8695ffeac4b47168b85e1900159e6e2ba57058c6868d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_kirch, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 10:13:36 compute-0 systemd[1]: libpod-conmon-aae61fab12d32e1ff5ae8695ffeac4b47168b85e1900159e6e2ba57058c6868d.scope: Deactivated successfully.
Jan 23 10:13:37 compute-0 sudo[252102]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:37.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:13:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:37 compute-0 sudo[252245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:13:37 compute-0 sudo[252245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:37 compute-0 sudo[252245]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:37 compute-0 sudo[252270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:13:37 compute-0 sudo[252270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:13:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:37.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:13:37 compute-0 podman[252335]: 2026-01-23 10:13:37.543144999 +0000 UTC m=+0.035990643 container create 27fa76d474e55b58fc1b312583d718d2983e464fd9ebd581a1adf34b129e44b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 10:13:37 compute-0 systemd[1]: Started libpod-conmon-27fa76d474e55b58fc1b312583d718d2983e464fd9ebd581a1adf34b129e44b6.scope.
Jan 23 10:13:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:13:37 compute-0 podman[252335]: 2026-01-23 10:13:37.52718471 +0000 UTC m=+0.020030374 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:13:37 compute-0 podman[252335]: 2026-01-23 10:13:37.690266136 +0000 UTC m=+0.183111800 container init 27fa76d474e55b58fc1b312583d718d2983e464fd9ebd581a1adf34b129e44b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:13:37 compute-0 podman[252335]: 2026-01-23 10:13:37.696519652 +0000 UTC m=+0.189365296 container start 27fa76d474e55b58fc1b312583d718d2983e464fd9ebd581a1adf34b129e44b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dewdney, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:13:37 compute-0 peaceful_dewdney[252351]: 167 167
Jan 23 10:13:37 compute-0 systemd[1]: libpod-27fa76d474e55b58fc1b312583d718d2983e464fd9ebd581a1adf34b129e44b6.scope: Deactivated successfully.
Jan 23 10:13:37 compute-0 podman[252335]: 2026-01-23 10:13:37.703160879 +0000 UTC m=+0.196006523 container attach 27fa76d474e55b58fc1b312583d718d2983e464fd9ebd581a1adf34b129e44b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dewdney, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Jan 23 10:13:37 compute-0 podman[252335]: 2026-01-23 10:13:37.703518339 +0000 UTC m=+0.196363983 container died 27fa76d474e55b58fc1b312583d718d2983e464fd9ebd581a1adf34b129e44b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 23 10:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-663900a9cdf6495a38476db6ec26df78ad0c6d52c7a81d4c426052c19ab1cc43-merged.mount: Deactivated successfully.
Jan 23 10:13:37 compute-0 podman[252335]: 2026-01-23 10:13:37.738551494 +0000 UTC m=+0.231397138 container remove 27fa76d474e55b58fc1b312583d718d2983e464fd9ebd581a1adf34b129e44b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dewdney, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:13:37 compute-0 systemd[1]: libpod-conmon-27fa76d474e55b58fc1b312583d718d2983e464fd9ebd581a1adf34b129e44b6.scope: Deactivated successfully.
Jan 23 10:13:37 compute-0 podman[252377]: 2026-01-23 10:13:37.909285925 +0000 UTC m=+0.043803612 container create 4abd877995850ae4b5d0a10d45b55557d66a2d7c6d58e8bfc4bd890055f287e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_carson, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:13:37 compute-0 systemd[1]: Started libpod-conmon-4abd877995850ae4b5d0a10d45b55557d66a2d7c6d58e8bfc4bd890055f287e5.scope.
Jan 23 10:13:37 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a06231842d6fe4c905479a29e5e97489abfb570a0b2f57adc49f19b69bbafb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a06231842d6fe4c905479a29e5e97489abfb570a0b2f57adc49f19b69bbafb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a06231842d6fe4c905479a29e5e97489abfb570a0b2f57adc49f19b69bbafb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a06231842d6fe4c905479a29e5e97489abfb570a0b2f57adc49f19b69bbafb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:13:37 compute-0 podman[252377]: 2026-01-23 10:13:37.891938077 +0000 UTC m=+0.026455784 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:13:37 compute-0 podman[252377]: 2026-01-23 10:13:37.994426349 +0000 UTC m=+0.128944066 container init 4abd877995850ae4b5d0a10d45b55557d66a2d7c6d58e8bfc4bd890055f287e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 23 10:13:38 compute-0 podman[252377]: 2026-01-23 10:13:38.002168327 +0000 UTC m=+0.136686014 container start 4abd877995850ae4b5d0a10d45b55557d66a2d7c6d58e8bfc4bd890055f287e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_carson, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 10:13:38 compute-0 podman[252377]: 2026-01-23 10:13:38.006040286 +0000 UTC m=+0.140557973 container attach 4abd877995850ae4b5d0a10d45b55557d66a2d7c6d58e8bfc4bd890055f287e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 10:13:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:38.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:13:38 compute-0 lvm[252469]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:13:38 compute-0 lvm[252469]: VG ceph_vg0 finished
Jan 23 10:13:38 compute-0 relaxed_carson[252393]: {}
Jan 23 10:13:38 compute-0 systemd[1]: libpod-4abd877995850ae4b5d0a10d45b55557d66a2d7c6d58e8bfc4bd890055f287e5.scope: Deactivated successfully.
Jan 23 10:13:38 compute-0 podman[252377]: 2026-01-23 10:13:38.744379459 +0000 UTC m=+0.878897146 container died 4abd877995850ae4b5d0a10d45b55557d66a2d7c6d58e8bfc4bd890055f287e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_carson, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:13:38 compute-0 systemd[1]: libpod-4abd877995850ae4b5d0a10d45b55557d66a2d7c6d58e8bfc4bd890055f287e5.scope: Consumed 1.094s CPU time.
Jan 23 10:13:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101338 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:13:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:38 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-71a06231842d6fe4c905479a29e5e97489abfb570a0b2f57adc49f19b69bbafb-merged.mount: Deactivated successfully.
Jan 23 10:13:38 compute-0 podman[252377]: 2026-01-23 10:13:38.970501047 +0000 UTC m=+1.105018734 container remove 4abd877995850ae4b5d0a10d45b55557d66a2d7c6d58e8bfc4bd890055f287e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_carson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 10:13:38 compute-0 systemd[1]: libpod-conmon-4abd877995850ae4b5d0a10d45b55557d66a2d7c6d58e8bfc4bd890055f287e5.scope: Deactivated successfully.
Jan 23 10:13:39 compute-0 sudo[252270]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:13:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:39 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:39 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:13:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:13:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:39.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:13:39 compute-0 sudo[252483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:13:39 compute-0 sudo[252483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:39 compute-0 sudo[252483]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:39 compute-0 ceph-mon[74335]: pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:13:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:13:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:39] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:13:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:39] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:13:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:40.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:13:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:40 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:41 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:13:41 compute-0 ceph-mon[74335]: pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:13:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:41 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:41.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:42.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:42 compute-0 sudo[252511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:13:42 compute-0 sudo[252511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:13:42 compute-0 sudo[252511]: pam_unix(sudo:session): session closed for user root
Jan 23 10:13:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:13:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:42 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:43 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:43 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:43.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:43 compute-0 podman[252537]: 2026-01-23 10:13:43.523208741 +0000 UTC m=+0.052800756 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:13:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:43.596Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:13:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:44.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 23 10:13:44 compute-0 ceph-mon[74335]: pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 23 10:13:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:44 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:45 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:45 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:45 compute-0 ceph-mon[74335]: pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 23 10:13:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:46.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Jan 23 10:13:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:46 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:47 compute-0 ceph-mon[74335]: pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Jan 23 10:13:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:47.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:13:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:47 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:47 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:47.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:48.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:13:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:13:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/239642082' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:13:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:13:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/239642082' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:13:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:48 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:49 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:49 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:49 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:13:49.241 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:13:49 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:13:49.243 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:13:49 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:13:49.244 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:13:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:49.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:49 compute-0 ceph-mon[74335]: pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:13:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/239642082' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:13:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/239642082' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:13:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:49] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:13:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:49] "GET /metrics HTTP/1.1" 200 48430 "" "Prometheus/2.51.0"
Jan 23 10:13:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:13:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:13:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:50.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:13:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:13:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:13:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:13:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:13:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:13:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:13:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:13:50 compute-0 ceph-mon[74335]: pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:13:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:51 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:51 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e70009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:51.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:52.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:13:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:52 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:53 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:53 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54002830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:53.598Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:13:53 compute-0 ceph-mon[74335]: pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:13:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:54.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:54 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:55 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:55 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:55.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:13:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:56.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:56 compute-0 ceph-mon[74335]: pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:13:56 compute-0 nova_compute[249229]: 2026-01-23 10:13:56.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:13:56 compute-0 nova_compute[249229]: 2026-01-23 10:13:56.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:13:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:56 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54002830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:56 compute-0 nova_compute[249229]: 2026-01-23 10:13:56.851 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:13:56 compute-0 nova_compute[249229]: 2026-01-23 10:13:56.851 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:13:56 compute-0 nova_compute[249229]: 2026-01-23 10:13:56.851 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:13:56 compute-0 nova_compute[249229]: 2026-01-23 10:13:56.852 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:13:56 compute-0 nova_compute[249229]: 2026-01-23 10:13:56.852 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:13:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:13:57.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:13:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:57 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:57 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:13:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/774375893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:13:57 compute-0 nova_compute[249229]: 2026-01-23 10:13:57.319 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:13:57 compute-0 nova_compute[249229]: 2026-01-23 10:13:57.467 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:13:57 compute-0 nova_compute[249229]: 2026-01-23 10:13:57.468 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4885MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:13:57 compute-0 nova_compute[249229]: 2026-01-23 10:13:57.468 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:13:57 compute-0 nova_compute[249229]: 2026-01-23 10:13:57.469 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:13:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:13:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:57.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:13:57 compute-0 nova_compute[249229]: 2026-01-23 10:13:57.529 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:13:57 compute-0 nova_compute[249229]: 2026-01-23 10:13:57.529 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:13:57 compute-0 nova_compute[249229]: 2026-01-23 10:13:57.549 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:13:57 compute-0 ceph-mon[74335]: pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:13:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:13:58 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4074051618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:13:58 compute-0 nova_compute[249229]: 2026-01-23 10:13:58.064 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:13:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:13:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:58.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:13:58 compute-0 nova_compute[249229]: 2026-01-23 10:13:58.075 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:13:58 compute-0 nova_compute[249229]: 2026-01-23 10:13:58.099 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:13:58 compute-0 nova_compute[249229]: 2026-01-23 10:13:58.101 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:13:58 compute-0 nova_compute[249229]: 2026-01-23 10:13:58.101 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:13:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:13:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/774375893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:13:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4074051618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:13:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:58 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:59 compute-0 nova_compute[249229]: 2026-01-23 10:13:59.093 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:13:59 compute-0 nova_compute[249229]: 2026-01-23 10:13:59.093 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:13:59 compute-0 nova_compute[249229]: 2026-01-23 10:13:59.094 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:13:59 compute-0 nova_compute[249229]: 2026-01-23 10:13:59.094 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:13:59 compute-0 nova_compute[249229]: 2026-01-23 10:13:59.108 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:13:59 compute-0 nova_compute[249229]: 2026-01-23 10:13:59.109 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:13:59 compute-0 nova_compute[249229]: 2026-01-23 10:13:59.109 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:13:59 compute-0 nova_compute[249229]: 2026-01-23 10:13:59.109 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:13:59 compute-0 nova_compute[249229]: 2026-01-23 10:13:59.109 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:13:59 compute-0 nova_compute[249229]: 2026-01-23 10:13:59.110 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:13:59 compute-0 nova_compute[249229]: 2026-01-23 10:13:59.110 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:13:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:59 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54002830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:13:59 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:13:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:13:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:13:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:59.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:13:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:13:59.766 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:13:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:13:59.767 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:13:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:13:59.767 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:13:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:59] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:13:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:13:59] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:14:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:00.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:00 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:01 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c001f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:01 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:01.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:02.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:02 compute-0 ceph-mon[74335]: pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:02 compute-0 sudo[252621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:14:02 compute-0 sudo[252621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:02 compute-0 sudo[252621]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:02 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:03 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c001f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:03 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54002830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:03.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:03.599Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:14:03 compute-0 ceph-mon[74335]: pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2203974394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:14:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2596105723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:14:03 compute-0 ceph-mon[74335]: pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/515581582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:14:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3988597975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:14:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:04.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:04 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:14:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:14:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:05 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:05 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c001f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:05.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:05 compute-0 podman[252649]: 2026-01-23 10:14:05.57624072 +0000 UTC m=+0.093691636 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 23 10:14:05 compute-0 ceph-mon[74335]: pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:06.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:06 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:07 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:14:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:07.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:14:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:07 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:07 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:07.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:08.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:08 compute-0 ceph-mon[74335]: pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:08 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c001f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:09 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:09 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:09 compute-0 ceph-mon[74335]: pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:09.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:09] "GET /metrics HTTP/1.1" 200 48351 "" "Prometheus/2.51.0"
Jan 23 10:14:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:09] "GET /metrics HTTP/1.1" 200 48351 "" "Prometheus/2.51.0"
Jan 23 10:14:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:10.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:10 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:11 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c003a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:11 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:11 compute-0 ceph-mon[74335]: pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:11.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:12.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:12 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:13 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:13 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c003ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:13.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:13.600Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:14:13 compute-0 ceph-mon[74335]: pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:14.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:14 compute-0 podman[252686]: 2026-01-23 10:14:14.514428647 +0000 UTC m=+0.047528268 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 10:14:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:14 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:15 compute-0 ceph-mon[74335]: pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:15 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:15 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:15.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:16.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:16 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c003ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:17.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:14:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:17.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:14:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:17 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:17 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:17.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:17 compute-0 ceph-mon[74335]: pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:18.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:18 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:18 compute-0 ceph-mon[74335]: pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:19 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c003ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:19 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:19.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:19] "GET /metrics HTTP/1.1" 200 48351 "" "Prometheus/2.51.0"
Jan 23 10:14:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:19] "GET /metrics HTTP/1.1" 200 48351 "" "Prometheus/2.51.0"
Jan 23 10:14:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:14:19
Jan 23 10:14:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:14:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:14:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr', '.nfs', 'backups', 'vms']
Jan 23 10:14:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:14:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:14:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:14:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:20.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:14:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:20 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:21 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:21 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c003ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:21 compute-0 ceph-mon[74335]: pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:21.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:22 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Check health
Jan 23 10:14:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:22.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:22 compute-0 sudo[252716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:14:22 compute-0 sudo[252716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:22 compute-0 sudo[252716]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:22 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:23 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:23 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:23.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:23.600Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:14:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:14:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:24.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:14:24 compute-0 ceph-mon[74335]: pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:24 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:25 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:25 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50002830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:25.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:26.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:26 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:27.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:14:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:27 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:27 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:27.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:28.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:28 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50002830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:29 compute-0 ceph-mon[74335]: pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:29 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:29 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:29.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:29] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:14:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:29] "GET /metrics HTTP/1.1" 200 48428 "" "Prometheus/2.51.0"
Jan 23 10:14:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:30.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:30 compute-0 ceph-mon[74335]: pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:30 compute-0 ceph-mon[74335]: pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:30 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:31 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50002830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:31 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:31.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:32.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:32 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:33 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:33 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50002830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:33 compute-0 ceph-mon[74335]: pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:33.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:33.601Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:14:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:14:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:34.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:14:34 compute-0 ceph-mon[74335]: pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:34 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:14:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:14:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:35 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:35 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:35 compute-0 ceph-mon[74335]: pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:14:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:35.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:36.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:36 compute-0 podman[252757]: 2026-01-23 10:14:36.602394718 +0000 UTC m=+0.127737873 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 10:14:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:36 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50002830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:37.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:14:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:37.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:37 compute-0 ceph-mon[74335]: pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:38.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:38 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:39 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:39 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:39.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:39 compute-0 ceph-mon[74335]: pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:39 compute-0 sudo[252787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:14:39 compute-0 sudo[252787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:39 compute-0 sudo[252787]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:39] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:14:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:39] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:14:40 compute-0 sudo[252812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:14:40 compute-0 sudo[252812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:40.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:40 compute-0 sudo[252812]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 10:14:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 10:14:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 23 10:14:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 10:14:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:40 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 10:14:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 10:14:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:41 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:41 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:14:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:41.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:14:41 compute-0 ceph-mon[74335]: pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:14:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:42.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:14:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 10:14:42 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 10:14:42 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:42 compute-0 sudo[252870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:14:42 compute-0 sudo[252870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:42 compute-0 sudo[252870]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:42 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 23 10:14:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 10:14:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:14:43 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:14:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:14:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:14:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:14:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:43 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:43 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:14:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:43.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:43.602Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:14:43 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:43 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:43 compute-0 ceph-mon[74335]: pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:43 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 10:14:43 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:14:43 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:14:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:14:44 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:14:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:14:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:14:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:14:44 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:14:44 compute-0 sudo[252896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:14:44 compute-0 sudo[252896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:44 compute-0 sudo[252896]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:44.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:44 compute-0 sudo[252921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:14:44 compute-0 sudo[252921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:44 compute-0 podman[252990]: 2026-01-23 10:14:44.577470941 +0000 UTC m=+0.026706102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:14:44 compute-0 podman[252990]: 2026-01-23 10:14:44.699635646 +0000 UTC m=+0.148870787 container create db13f25d3804656a4d0f131be9b6f558bed83afc4e94e956523d3208984b69b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 10:14:44 compute-0 systemd[1]: Started libpod-conmon-db13f25d3804656a4d0f131be9b6f558bed83afc4e94e956523d3208984b69b3.scope.
Jan 23 10:14:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:14:44 compute-0 podman[252990]: 2026-01-23 10:14:44.81467411 +0000 UTC m=+0.263909281 container init db13f25d3804656a4d0f131be9b6f558bed83afc4e94e956523d3208984b69b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 10:14:44 compute-0 podman[253004]: 2026-01-23 10:14:44.81965166 +0000 UTC m=+0.077119080 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Jan 23 10:14:44 compute-0 podman[252990]: 2026-01-23 10:14:44.822925102 +0000 UTC m=+0.272160243 container start db13f25d3804656a4d0f131be9b6f558bed83afc4e94e956523d3208984b69b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:14:44 compute-0 podman[252990]: 2026-01-23 10:14:44.826670157 +0000 UTC m=+0.275905318 container attach db13f25d3804656a4d0f131be9b6f558bed83afc4e94e956523d3208984b69b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 10:14:44 compute-0 elastic_jennings[253008]: 167 167
Jan 23 10:14:44 compute-0 systemd[1]: libpod-db13f25d3804656a4d0f131be9b6f558bed83afc4e94e956523d3208984b69b3.scope: Deactivated successfully.
Jan 23 10:14:44 compute-0 podman[252990]: 2026-01-23 10:14:44.828780967 +0000 UTC m=+0.278016128 container died db13f25d3804656a4d0f131be9b6f558bed83afc4e94e956523d3208984b69b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6123ec73506bf5650d51920e29dab303ffcdef0cc25497144f3f1478f756134-merged.mount: Deactivated successfully.
Jan 23 10:14:44 compute-0 podman[252990]: 2026-01-23 10:14:44.87156638 +0000 UTC m=+0.320801521 container remove db13f25d3804656a4d0f131be9b6f558bed83afc4e94e956523d3208984b69b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_jennings, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Jan 23 10:14:44 compute-0 systemd[1]: libpod-conmon-db13f25d3804656a4d0f131be9b6f558bed83afc4e94e956523d3208984b69b3.scope: Deactivated successfully.
Jan 23 10:14:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:44 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:44 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:44 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:44 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:14:44 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:14:44 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:14:44 compute-0 ceph-mon[74335]: pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:45 compute-0 podman[253049]: 2026-01-23 10:14:45.034386219 +0000 UTC m=+0.047706693 container create cf6a251967d45e62232606a4b134555534b7fc840b0343e2240f2dfa5b09bd96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 10:14:45 compute-0 podman[253049]: 2026-01-23 10:14:45.015049665 +0000 UTC m=+0.028370169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:14:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:45 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:45 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:45 compute-0 systemd[1]: Started libpod-conmon-cf6a251967d45e62232606a4b134555534b7fc840b0343e2240f2dfa5b09bd96.scope.
Jan 23 10:14:45 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2457a0d2c4cf7207e3cf220fc05c816b7d183013bbf5702f6c11752067891a26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2457a0d2c4cf7207e3cf220fc05c816b7d183013bbf5702f6c11752067891a26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2457a0d2c4cf7207e3cf220fc05c816b7d183013bbf5702f6c11752067891a26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2457a0d2c4cf7207e3cf220fc05c816b7d183013bbf5702f6c11752067891a26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2457a0d2c4cf7207e3cf220fc05c816b7d183013bbf5702f6c11752067891a26/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:45 compute-0 podman[253049]: 2026-01-23 10:14:45.497536993 +0000 UTC m=+0.510857497 container init cf6a251967d45e62232606a4b134555534b7fc840b0343e2240f2dfa5b09bd96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:14:45 compute-0 podman[253049]: 2026-01-23 10:14:45.503090349 +0000 UTC m=+0.516410823 container start cf6a251967d45e62232606a4b134555534b7fc840b0343e2240f2dfa5b09bd96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euler, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:14:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:14:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:45.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:14:45 compute-0 podman[253049]: 2026-01-23 10:14:45.614267065 +0000 UTC m=+0.627587539 container attach cf6a251967d45e62232606a4b134555534b7fc840b0343e2240f2dfa5b09bd96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euler, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:14:45 compute-0 elastic_euler[253065]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:14:45 compute-0 elastic_euler[253065]: --> All data devices are unavailable
Jan 23 10:14:45 compute-0 systemd[1]: libpod-cf6a251967d45e62232606a4b134555534b7fc840b0343e2240f2dfa5b09bd96.scope: Deactivated successfully.
Jan 23 10:14:45 compute-0 podman[253049]: 2026-01-23 10:14:45.85447945 +0000 UTC m=+0.867799954 container died cf6a251967d45e62232606a4b134555534b7fc840b0343e2240f2dfa5b09bd96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:14:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2457a0d2c4cf7207e3cf220fc05c816b7d183013bbf5702f6c11752067891a26-merged.mount: Deactivated successfully.
Jan 23 10:14:46 compute-0 podman[253049]: 2026-01-23 10:14:46.093904263 +0000 UTC m=+1.107224737 container remove cf6a251967d45e62232606a4b134555534b7fc840b0343e2240f2dfa5b09bd96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 23 10:14:46 compute-0 systemd[1]: libpod-conmon-cf6a251967d45e62232606a4b134555534b7fc840b0343e2240f2dfa5b09bd96.scope: Deactivated successfully.
Jan 23 10:14:46 compute-0 sudo[252921]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:46.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:46 compute-0 sudo[253097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:14:46 compute-0 sudo[253097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:46 compute-0 sudo[253097]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:46 compute-0 sudo[253122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:14:46 compute-0 sudo[253122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:46 compute-0 podman[253190]: 2026-01-23 10:14:46.611819207 +0000 UTC m=+0.038299888 container create 1cddabdf673f9e05948e2795d7d85154b41edf81f3ba749c020bfd263ebeb3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_brown, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:14:46 compute-0 systemd[1]: Started libpod-conmon-1cddabdf673f9e05948e2795d7d85154b41edf81f3ba749c020bfd263ebeb3a5.scope.
Jan 23 10:14:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:14:46 compute-0 podman[253190]: 2026-01-23 10:14:46.595685083 +0000 UTC m=+0.022165784 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:14:46 compute-0 podman[253190]: 2026-01-23 10:14:46.695059128 +0000 UTC m=+0.121539849 container init 1cddabdf673f9e05948e2795d7d85154b41edf81f3ba749c020bfd263ebeb3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:14:46 compute-0 podman[253190]: 2026-01-23 10:14:46.700987624 +0000 UTC m=+0.127468305 container start 1cddabdf673f9e05948e2795d7d85154b41edf81f3ba749c020bfd263ebeb3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 10:14:46 compute-0 gallant_brown[253206]: 167 167
Jan 23 10:14:46 compute-0 systemd[1]: libpod-1cddabdf673f9e05948e2795d7d85154b41edf81f3ba749c020bfd263ebeb3a5.scope: Deactivated successfully.
Jan 23 10:14:46 compute-0 podman[253190]: 2026-01-23 10:14:46.705689257 +0000 UTC m=+0.132170038 container attach 1cddabdf673f9e05948e2795d7d85154b41edf81f3ba749c020bfd263ebeb3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_brown, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 10:14:46 compute-0 podman[253190]: 2026-01-23 10:14:46.706673804 +0000 UTC m=+0.133154495 container died 1cddabdf673f9e05948e2795d7d85154b41edf81f3ba749c020bfd263ebeb3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_brown, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 10:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7666bf3b4e82f604a5c1f9e16da9031b65b505a64ec074cffc1c1405ce5c2014-merged.mount: Deactivated successfully.
Jan 23 10:14:46 compute-0 podman[253190]: 2026-01-23 10:14:46.747841262 +0000 UTC m=+0.174321943 container remove 1cddabdf673f9e05948e2795d7d85154b41edf81f3ba749c020bfd263ebeb3a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:14:46 compute-0 systemd[1]: libpod-conmon-1cddabdf673f9e05948e2795d7d85154b41edf81f3ba749c020bfd263ebeb3a5.scope: Deactivated successfully.
Jan 23 10:14:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:46 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:46 compute-0 podman[253231]: 2026-01-23 10:14:46.923559333 +0000 UTC m=+0.054876244 container create 82ebca33257ab3f899f6f5f4e2cb6a35246064668a4c64eb2cd755f6743eed42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Jan 23 10:14:46 compute-0 systemd[1]: Started libpod-conmon-82ebca33257ab3f899f6f5f4e2cb6a35246064668a4c64eb2cd755f6743eed42.scope.
Jan 23 10:14:46 compute-0 podman[253231]: 2026-01-23 10:14:46.902275495 +0000 UTC m=+0.033592416 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:14:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9fd353ce6e8792061c7f316e0e85692ca5bacd8e87df31e33f0a6ce86443c58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9fd353ce6e8792061c7f316e0e85692ca5bacd8e87df31e33f0a6ce86443c58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9fd353ce6e8792061c7f316e0e85692ca5bacd8e87df31e33f0a6ce86443c58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9fd353ce6e8792061c7f316e0e85692ca5bacd8e87df31e33f0a6ce86443c58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:47 compute-0 podman[253231]: 2026-01-23 10:14:47.01665206 +0000 UTC m=+0.147968951 container init 82ebca33257ab3f899f6f5f4e2cb6a35246064668a4c64eb2cd755f6743eed42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 10:14:47 compute-0 podman[253231]: 2026-01-23 10:14:47.022560776 +0000 UTC m=+0.153877667 container start 82ebca33257ab3f899f6f5f4e2cb6a35246064668a4c64eb2cd755f6743eed42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 10:14:47 compute-0 podman[253231]: 2026-01-23 10:14:47.025651123 +0000 UTC m=+0.156968014 container attach 82ebca33257ab3f899f6f5f4e2cb6a35246064668a4c64eb2cd755f6743eed42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 23 10:14:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:47.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:14:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:47 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.209957) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163287210055, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2143, "num_deletes": 251, "total_data_size": 4277833, "memory_usage": 4360592, "flush_reason": "Manual Compaction"}
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 23 10:14:47 compute-0 ceph-mon[74335]: pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163287244521, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4161547, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20103, "largest_seqno": 22245, "table_properties": {"data_size": 4151860, "index_size": 6117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19752, "raw_average_key_size": 20, "raw_value_size": 4132530, "raw_average_value_size": 4225, "num_data_blocks": 268, "num_entries": 978, "num_filter_entries": 978, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163067, "oldest_key_time": 1769163067, "file_creation_time": 1769163287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 34659 microseconds, and 8705 cpu microseconds.
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.244605) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4161547 bytes OK
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.244645) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.247260) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.247296) EVENT_LOG_v1 {"time_micros": 1769163287247288, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.247323) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4269133, prev total WAL file size 4269133, number of live WAL files 2.
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.249246) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:14:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:47 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(4064KB)], [44(12MB)]
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163287249376, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17420156, "oldest_snapshot_seqno": -1}
Jan 23 10:14:47 compute-0 kind_archimedes[253248]: {
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:     "1": [
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:         {
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             "devices": [
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "/dev/loop3"
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             ],
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             "lv_name": "ceph_lv0",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             "lv_size": "21470642176",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             "name": "ceph_lv0",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             "tags": {
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.cluster_name": "ceph",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.crush_device_class": "",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.encrypted": "0",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.osd_id": "1",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.type": "block",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.vdo": "0",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:                 "ceph.with_tpm": "0"
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             },
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             "type": "block",
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:             "vg_name": "ceph_vg0"
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:         }
Jan 23 10:14:47 compute-0 kind_archimedes[253248]:     ]
Jan 23 10:14:47 compute-0 kind_archimedes[253248]: }
Jan 23 10:14:47 compute-0 systemd[1]: libpod-82ebca33257ab3f899f6f5f4e2cb6a35246064668a4c64eb2cd755f6743eed42.scope: Deactivated successfully.
Jan 23 10:14:47 compute-0 podman[253231]: 2026-01-23 10:14:47.342815483 +0000 UTC m=+0.474132424 container died 82ebca33257ab3f899f6f5f4e2cb6a35246064668a4c64eb2cd755f6743eed42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_archimedes, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5498 keys, 15176884 bytes, temperature: kUnknown
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163287381190, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 15176884, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15137226, "index_size": 24828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 138551, "raw_average_key_size": 25, "raw_value_size": 15034704, "raw_average_value_size": 2734, "num_data_blocks": 1026, "num_entries": 5498, "num_filter_entries": 5498, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769163287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.381485) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15176884 bytes
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.383195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.1 rd, 115.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.6 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 6016, records dropped: 518 output_compression: NoCompression
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.383260) EVENT_LOG_v1 {"time_micros": 1769163287383241, "job": 22, "event": "compaction_finished", "compaction_time_micros": 131898, "compaction_time_cpu_micros": 34487, "output_level": 6, "num_output_files": 1, "total_output_size": 15176884, "num_input_records": 6016, "num_output_records": 5498, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163287384009, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163287386162, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.249159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.386219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.386225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.386227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.386228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:14:47 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:14:47.386229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9fd353ce6e8792061c7f316e0e85692ca5bacd8e87df31e33f0a6ce86443c58-merged.mount: Deactivated successfully.
Jan 23 10:14:47 compute-0 podman[253231]: 2026-01-23 10:14:47.416950848 +0000 UTC m=+0.548267739 container remove 82ebca33257ab3f899f6f5f4e2cb6a35246064668a4c64eb2cd755f6743eed42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_archimedes, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:14:47 compute-0 systemd[1]: libpod-conmon-82ebca33257ab3f899f6f5f4e2cb6a35246064668a4c64eb2cd755f6743eed42.scope: Deactivated successfully.
Jan 23 10:14:47 compute-0 sudo[253122]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:47 compute-0 sudo[253269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:14:47 compute-0 sudo[253269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:47 compute-0 sudo[253269]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:47.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:47 compute-0 sudo[253294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:14:47 compute-0 sudo[253294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:47 compute-0 podman[253363]: 2026-01-23 10:14:47.993556132 +0000 UTC m=+0.054745180 container create 7d37ab9697c3db8ff3580abfdf96da09dcca3b4fd13e72e7a84b3e42cdb34243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:14:48 compute-0 systemd[1]: Started libpod-conmon-7d37ab9697c3db8ff3580abfdf96da09dcca3b4fd13e72e7a84b3e42cdb34243.scope.
Jan 23 10:14:48 compute-0 podman[253363]: 2026-01-23 10:14:47.962297153 +0000 UTC m=+0.023486221 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:14:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:14:48 compute-0 podman[253363]: 2026-01-23 10:14:48.083128421 +0000 UTC m=+0.144317479 container init 7d37ab9697c3db8ff3580abfdf96da09dcca3b4fd13e72e7a84b3e42cdb34243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 10:14:48 compute-0 podman[253363]: 2026-01-23 10:14:48.090295033 +0000 UTC m=+0.151484071 container start 7d37ab9697c3db8ff3580abfdf96da09dcca3b4fd13e72e7a84b3e42cdb34243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 10:14:48 compute-0 systemd[1]: libpod-7d37ab9697c3db8ff3580abfdf96da09dcca3b4fd13e72e7a84b3e42cdb34243.scope: Deactivated successfully.
Jan 23 10:14:48 compute-0 objective_nash[253379]: 167 167
Jan 23 10:14:48 compute-0 conmon[253379]: conmon 7d37ab9697c3db8ff358 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d37ab9697c3db8ff3580abfdf96da09dcca3b4fd13e72e7a84b3e42cdb34243.scope/container/memory.events
Jan 23 10:14:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:48.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:48 compute-0 podman[253363]: 2026-01-23 10:14:48.168667317 +0000 UTC m=+0.229856385 container attach 7d37ab9697c3db8ff3580abfdf96da09dcca3b4fd13e72e7a84b3e42cdb34243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 10:14:48 compute-0 podman[253363]: 2026-01-23 10:14:48.169710836 +0000 UTC m=+0.230899884 container died 7d37ab9697c3db8ff3580abfdf96da09dcca3b4fd13e72e7a84b3e42cdb34243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e089e5e5402a5834b7a15bd0220f2347b200694df311ef7dd63b99ebdb3ca3e2-merged.mount: Deactivated successfully.
Jan 23 10:14:48 compute-0 podman[253363]: 2026-01-23 10:14:48.245197479 +0000 UTC m=+0.306386517 container remove 7d37ab9697c3db8ff3580abfdf96da09dcca3b4fd13e72e7a84b3e42cdb34243 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:14:48 compute-0 systemd[1]: libpod-conmon-7d37ab9697c3db8ff3580abfdf96da09dcca3b4fd13e72e7a84b3e42cdb34243.scope: Deactivated successfully.
Jan 23 10:14:48 compute-0 podman[253406]: 2026-01-23 10:14:48.42520855 +0000 UTC m=+0.042698532 container create 10e0bbfe308612f32324a468486fd582b85aef40cf60a36911318c8ad135278b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 10:14:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:48 compute-0 systemd[1]: Started libpod-conmon-10e0bbfe308612f32324a468486fd582b85aef40cf60a36911318c8ad135278b.scope.
Jan 23 10:14:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b653e0004ff32dde6803d1106a932f25e8dedef844551a7f263b6d2b740275c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b653e0004ff32dde6803d1106a932f25e8dedef844551a7f263b6d2b740275c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b653e0004ff32dde6803d1106a932f25e8dedef844551a7f263b6d2b740275c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b653e0004ff32dde6803d1106a932f25e8dedef844551a7f263b6d2b740275c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:14:48 compute-0 podman[253406]: 2026-01-23 10:14:48.407293686 +0000 UTC m=+0.024783698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:14:48 compute-0 podman[253406]: 2026-01-23 10:14:48.528527705 +0000 UTC m=+0.146017707 container init 10e0bbfe308612f32324a468486fd582b85aef40cf60a36911318c8ad135278b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_chandrasekhar, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:14:48 compute-0 podman[253406]: 2026-01-23 10:14:48.534636187 +0000 UTC m=+0.152126169 container start 10e0bbfe308612f32324a468486fd582b85aef40cf60a36911318c8ad135278b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:14:48 compute-0 podman[253406]: 2026-01-23 10:14:48.559794364 +0000 UTC m=+0.177284356 container attach 10e0bbfe308612f32324a468486fd582b85aef40cf60a36911318c8ad135278b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_chandrasekhar, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:14:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:14:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3340243466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:14:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:14:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3340243466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:14:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=plugins.update.checker t=2026-01-23T10:14:48.748343586Z level=info msg="Update check succeeded" duration=54.340288ms
Jan 23 10:14:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=grafana.update.checker t=2026-01-23T10:14:48.749095368Z level=info msg="Update check succeeded" duration=51.284462ms
Jan 23 10:14:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=cleanup t=2026-01-23T10:14:48.808468087Z level=info msg="Completed cleanup jobs" duration=184.428216ms
Jan 23 10:14:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:48 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:49 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:49 compute-0 lvm[253497]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:14:49 compute-0 lvm[253497]: VG ceph_vg0 finished
Jan 23 10:14:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:49 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:49 compute-0 flamboyant_chandrasekhar[253423]: {}
Jan 23 10:14:49 compute-0 systemd[1]: libpod-10e0bbfe308612f32324a468486fd582b85aef40cf60a36911318c8ad135278b.scope: Deactivated successfully.
Jan 23 10:14:49 compute-0 systemd[1]: libpod-10e0bbfe308612f32324a468486fd582b85aef40cf60a36911318c8ad135278b.scope: Consumed 1.125s CPU time.
Jan 23 10:14:49 compute-0 conmon[253423]: conmon 10e0bbfe308612f32324 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-10e0bbfe308612f32324a468486fd582b85aef40cf60a36911318c8ad135278b.scope/container/memory.events
Jan 23 10:14:49 compute-0 podman[253406]: 2026-01-23 10:14:49.313228782 +0000 UTC m=+0.930718764 container died 10e0bbfe308612f32324a468486fd582b85aef40cf60a36911318c8ad135278b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_chandrasekhar, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 10:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b653e0004ff32dde6803d1106a932f25e8dedef844551a7f263b6d2b740275c0-merged.mount: Deactivated successfully.
Jan 23 10:14:49 compute-0 podman[253406]: 2026-01-23 10:14:49.361952882 +0000 UTC m=+0.979442864 container remove 10e0bbfe308612f32324a468486fd582b85aef40cf60a36911318c8ad135278b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 10:14:49 compute-0 systemd[1]: libpod-conmon-10e0bbfe308612f32324a468486fd582b85aef40cf60a36911318c8ad135278b.scope: Deactivated successfully.
Jan 23 10:14:49 compute-0 sudo[253294]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:14:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:14:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:49 compute-0 sudo[253514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:14:49 compute-0 sudo[253514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:14:49 compute-0 sudo[253514]: pam_unix(sudo:session): session closed for user root
Jan 23 10:14:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:49.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:49 compute-0 ceph-mon[74335]: pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3340243466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:14:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3340243466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:14:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:14:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:49] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:14:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:49] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Jan 23 10:14:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:14:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:14:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:14:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:14:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:14:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:14:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:14:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:14:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:50.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:14:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:51 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:51 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:51.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:51 compute-0 ceph-mon[74335]: pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:52.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:52 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:53 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:53 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:53.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:53.603Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:14:53 compute-0 ceph-mon[74335]: pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:14:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:54.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:54 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a8a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:54 compute-0 ceph-mon[74335]: pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:55 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e7000a8a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:55 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:55.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:14:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:56.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:56 compute-0 nova_compute[249229]: 2026-01-23 10:14:56.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:14:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:56 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:57.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:14:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:57.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:14:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:14:57.081Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:14:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:57 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:57 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e600014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:57.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:57 compute-0 ceph-mon[74335]: pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:14:57 compute-0 nova_compute[249229]: 2026-01-23 10:14:57.709 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:14:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:14:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:58.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:14:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:58 compute-0 nova_compute[249229]: 2026-01-23 10:14:58.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:14:58 compute-0 nova_compute[249229]: 2026-01-23 10:14:58.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:14:58 compute-0 nova_compute[249229]: 2026-01-23 10:14:58.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:14:58 compute-0 nova_compute[249229]: 2026-01-23 10:14:58.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:14:58 compute-0 nova_compute[249229]: 2026-01-23 10:14:58.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:14:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:58 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:59 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:14:59 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:14:59 compute-0 nova_compute[249229]: 2026-01-23 10:14:59.374 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:14:59 compute-0 nova_compute[249229]: 2026-01-23 10:14:59.375 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:14:59 compute-0 nova_compute[249229]: 2026-01-23 10:14:59.375 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:14:59 compute-0 nova_compute[249229]: 2026-01-23 10:14:59.375 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:14:59 compute-0 nova_compute[249229]: 2026-01-23 10:14:59.376 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:14:59 compute-0 ceph-mon[74335]: pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:14:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:14:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:14:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:59.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:14:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:14:59.768 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:14:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:14:59.770 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:14:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:14:59.770 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:14:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:59] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:14:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:14:59] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Jan 23 10:15:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:15:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315230113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:15:00 compute-0 nova_compute[249229]: 2026-01-23 10:15:00.079 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.703s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:15:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:00.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:00 compute-0 nova_compute[249229]: 2026-01-23 10:15:00.238 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:15:00 compute-0 nova_compute[249229]: 2026-01-23 10:15:00.239 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4915MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:15:00 compute-0 nova_compute[249229]: 2026-01-23 10:15:00.239 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:15:00 compute-0 nova_compute[249229]: 2026-01-23 10:15:00.239 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:15:00 compute-0 nova_compute[249229]: 2026-01-23 10:15:00.383 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:15:00 compute-0 nova_compute[249229]: 2026-01-23 10:15:00.383 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:15:00 compute-0 nova_compute[249229]: 2026-01-23 10:15:00.399 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:15:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/315230113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:15:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:15:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458334425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:15:00 compute-0 nova_compute[249229]: 2026-01-23 10:15:00.862 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:15:00 compute-0 nova_compute[249229]: 2026-01-23 10:15:00.868 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:15:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:00 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e600014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.127380) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163301127416, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 381, "num_deletes": 250, "total_data_size": 339373, "memory_usage": 347328, "flush_reason": "Manual Compaction"}
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163301132306, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 319310, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22247, "largest_seqno": 22626, "table_properties": {"data_size": 316996, "index_size": 478, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6033, "raw_average_key_size": 19, "raw_value_size": 312329, "raw_average_value_size": 1017, "num_data_blocks": 20, "num_entries": 307, "num_filter_entries": 307, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163287, "oldest_key_time": 1769163287, "file_creation_time": 1769163301, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 4989 microseconds, and 1664 cpu microseconds.
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.132366) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 319310 bytes OK
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.132384) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.134081) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.134099) EVENT_LOG_v1 {"time_micros": 1769163301134093, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.134115) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 336924, prev total WAL file size 336924, number of live WAL files 2.
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.134481) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(311KB)], [47(14MB)]
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163301134539, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 15496194, "oldest_snapshot_seqno": -1}
Jan 23 10:15:01 compute-0 nova_compute[249229]: 2026-01-23 10:15:01.146 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:15:01 compute-0 nova_compute[249229]: 2026-01-23 10:15:01.147 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:15:01 compute-0 nova_compute[249229]: 2026-01-23 10:15:01.148 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:15:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:01 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5295 keys, 11393396 bytes, temperature: kUnknown
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163301206770, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 11393396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11359577, "index_size": 19501, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13253, "raw_key_size": 134750, "raw_average_key_size": 25, "raw_value_size": 11264974, "raw_average_value_size": 2127, "num_data_blocks": 794, "num_entries": 5295, "num_filter_entries": 5295, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769163301, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.207041) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 11393396 bytes
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.208519) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.3 rd, 157.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.5 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(84.2) write-amplify(35.7) OK, records in: 5805, records dropped: 510 output_compression: NoCompression
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.208540) EVENT_LOG_v1 {"time_micros": 1769163301208530, "job": 24, "event": "compaction_finished", "compaction_time_micros": 72318, "compaction_time_cpu_micros": 24099, "output_level": 6, "num_output_files": 1, "total_output_size": 11393396, "num_input_records": 5805, "num_output_records": 5295, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163301208767, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163301211833, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.134415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.211872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.211877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.211879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.211881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:15:01 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:15:01.211883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:15:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:01 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:15:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:01.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:15:01 compute-0 ceph-mon[74335]: pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2458334425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:15:02 compute-0 nova_compute[249229]: 2026-01-23 10:15:02.140 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:15:02 compute-0 nova_compute[249229]: 2026-01-23 10:15:02.140 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:15:02 compute-0 nova_compute[249229]: 2026-01-23 10:15:02.140 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:15:02 compute-0 nova_compute[249229]: 2026-01-23 10:15:02.141 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:15:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:02.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:02 compute-0 nova_compute[249229]: 2026-01-23 10:15:02.232 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:15:02 compute-0 nova_compute[249229]: 2026-01-23 10:15:02.233 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:15:02 compute-0 nova_compute[249229]: 2026-01-23 10:15:02.233 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:15:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:15:02 compute-0 sudo[253599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:15:02 compute-0 sudo[253599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:02 compute-0 sudo[253599]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:02 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/817666258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:15:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:02 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:03 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e600014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:03 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:03.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:03.604Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:15:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:03.604Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:15:03 compute-0 ceph-mon[74335]: pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:15:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4283179534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:15:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3691473830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:15:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3138463557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:15:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:04.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:04 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:04 compute-0 ceph-mon[74335]: pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:15:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:15:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:05 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:05 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e600014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:05.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:15:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:15:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:06.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:15:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:15:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:06 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e540033e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:07 compute-0 ceph-mon[74335]: pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:15:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:07.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:15:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:07.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:15:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:07 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:07 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e4c0044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:07 compute-0 podman[253628]: 2026-01-23 10:15:07.568017106 +0000 UTC m=+0.097334308 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 10:15:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:07.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:08.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:08 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e600014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:09 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e600014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:09 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:09 compute-0 ceph-mon[74335]: pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:09.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:09] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:15:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:09] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:15:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:10.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:10 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:11 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e44000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:11 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e74001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:11.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:11 compute-0 ceph-mon[74335]: pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:12.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:15:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:12 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e74001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:13 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:13 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:15:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:13.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:15:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:13.605Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:15:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:13.605Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:15:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:13.606Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:15:14 compute-0 ceph-mon[74335]: pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:15:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:14.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:14 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:14 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:15 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e74001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:15 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:15 compute-0 podman[253664]: 2026-01-23 10:15:15.520542304 +0000 UTC m=+0.048850155 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 23 10:15:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:15.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:15 compute-0 ceph-mon[74335]: pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:16.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:15:16 compute-0 ceph-mon[74335]: pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:15:16 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:16 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:17.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:15:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:17 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e74001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:17 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:15:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:17.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:15:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:18.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:18 compute-0 ceph-mon[74335]: pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:18 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:19 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:19 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740030b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:19.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:19] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:15:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:19] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Jan 23 10:15:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:15:19
Jan 23 10:15:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:15:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:15:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'vms', '.mgr', '.nfs', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control']
Jan 23 10:15:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:15:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:15:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:15:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:20.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:15:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:15:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:20 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:21 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e44001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:21 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50003650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:21 compute-0 ceph-mon[74335]: pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:22.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:15:22 compute-0 sudo[253691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:15:22 compute-0 sudo[253691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:22 compute-0 sudo[253691]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:22 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:23 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:23 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e44001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:23.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:23.606Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:15:23 compute-0 ceph-mon[74335]: pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:15:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:24.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:24 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:24 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:25 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:25 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740030b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:25.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:25 compute-0 ceph-mon[74335]: pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:26.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:15:26 compute-0 ceph-mon[74335]: pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:15:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:26 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e44002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:27.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:15:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:27 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:27 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:27.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:28.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:28 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:29 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:29 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:29.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:29 compute-0 ceph-mon[74335]: pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:29] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:15:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:29] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Jan 23 10:15:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:30.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:30 compute-0 ceph-mon[74335]: pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:30 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:30 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:31 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:31 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:31.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:32.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:15:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:32 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:33 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:33 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:33.607Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:15:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:33 compute-0 ceph-mon[74335]: pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:15:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=404 latency=0.002000056s ======
Jan 23 10:15:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:34.070 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.002000056s
Jan 23 10:15:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - - [23/Jan/2026:10:15:34.090 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000028s
Jan 23 10:15:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:34.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:34 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:34 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:34 compute-0 ceph-mon[74335]: pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:15:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:15:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:35 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:35 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:35.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:15:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:15:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:36.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:15:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:15:36 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:36 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:37.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:15:37 compute-0 ceph-mon[74335]: pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:15:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:37 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:15:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:37.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:15:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 23 10:15:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 23 10:15:38 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 23 10:15:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:38.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Jan 23 10:15:38 compute-0 podman[253732]: 2026-01-23 10:15:38.581181297 +0000 UTC m=+0.101377886 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 10:15:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:38 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 23 10:15:39 compute-0 ceph-mon[74335]: osdmap e139: 3 total, 3 up, 3 in
Jan 23 10:15:39 compute-0 ceph-mon[74335]: pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Jan 23 10:15:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 23 10:15:39 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 23 10:15:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:39 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e50004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:39 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:39.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:39] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:15:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:39] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:15:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:40.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 23 10:15:40 compute-0 ceph-mon[74335]: osdmap e140: 3 total, 3 up, 3 in
Jan 23 10:15:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 23 10:15:40 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 23 10:15:40 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:40 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:41 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:41 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:15:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:41.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:15:41 compute-0 ceph-mon[74335]: pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:41 compute-0 ceph-mon[74335]: osdmap e141: 3 total, 3 up, 3 in
Jan 23 10:15:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:42.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v668: 353 pgs: 353 active+clean; 21 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Jan 23 10:15:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 23 10:15:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 23 10:15:42 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 23 10:15:42 compute-0 ceph-mon[74335]: pgmap v668: 353 pgs: 353 active+clean; 21 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Jan 23 10:15:42 compute-0 sudo[253765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:15:42 compute-0 sudo[253765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:42 compute-0 sudo[253765]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:42 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:43 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:43 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:43.608Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:15:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:43.608Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:15:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:43.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:44.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v670: 353 pgs: 353 active+clean; 21 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Jan 23 10:15:44 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:44 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:45 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:45 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:45 compute-0 ceph-mon[74335]: osdmap e142: 3 total, 3 up, 3 in
Jan 23 10:15:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:45.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 23 10:15:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 23 10:15:46 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 23 10:15:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:46.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v672: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 64 op/s
Jan 23 10:15:46 compute-0 podman[253794]: 2026-01-23 10:15:46.530334196 +0000 UTC m=+0.056717100 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 10:15:46 compute-0 ceph-mon[74335]: pgmap v670: 353 pgs: 353 active+clean; 21 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Jan 23 10:15:46 compute-0 ceph-mon[74335]: osdmap e143: 3 total, 3 up, 3 in
Jan 23 10:15:46 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:46 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:47.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:15:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:47 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:47 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:15:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:47.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:15:47 compute-0 ceph-mon[74335]: pgmap v672: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 64 op/s
Jan 23 10:15:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v673: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.3 MiB/s wr, 49 op/s
Jan 23 10:15:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:15:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3735003259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:15:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:15:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3735003259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:15:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:48 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3735003259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:15:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3735003259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:15:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:49 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:49 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:49 compute-0 sudo[253814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:15:49 compute-0 sudo[253814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:49 compute-0 sudo[253814]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:49 compute-0 sudo[253839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:15:49 compute-0 sudo[253839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:49] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:15:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:49] "GET /metrics HTTP/1.1" 200 48429 "" "Prometheus/2.51.0"
Jan 23 10:15:50 compute-0 ceph-mon[74335]: pgmap v673: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.3 MiB/s wr, 49 op/s
Jan 23 10:15:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:15:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:15:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:15:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:15:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:15:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:15:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:15:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:15:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:50 compute-0 sudo[253839]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v674: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Jan 23 10:15:50 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:50 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:15:51 compute-0 ceph-mon[74335]: pgmap v674: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Jan 23 10:15:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:51 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:51 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:51.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:52.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v675: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.1 MiB/s wr, 20 op/s
Jan 23 10:15:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 10:15:52 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 10:15:52 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:52 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:53 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:53 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:15:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:15:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:15:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:15:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:15:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:53.610Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:15:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:53.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:15:53 compute-0 ceph-mon[74335]: pgmap v675: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.1 MiB/s wr, 20 op/s
Jan 23 10:15:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:53.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:15:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:15:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:15:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:15:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:15:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:15:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:15:53 compute-0 sudo[253898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:15:53 compute-0 sudo[253898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:53 compute-0 sudo[253898]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:53 compute-0 sudo[253923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:15:53 compute-0 sudo[253923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:54 compute-0 podman[253987]: 2026-01-23 10:15:54.21698533 +0000 UTC m=+0.039079409 container create 7b9da0756f0409e0c83f3ea529ff92227c44079f494b846d950d7343fbafc874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 10:15:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:54.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:54 compute-0 systemd[1]: Started libpod-conmon-7b9da0756f0409e0c83f3ea529ff92227c44079f494b846d950d7343fbafc874.scope.
Jan 23 10:15:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:15:54 compute-0 podman[253987]: 2026-01-23 10:15:54.199161995 +0000 UTC m=+0.021256094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:15:54 compute-0 podman[253987]: 2026-01-23 10:15:54.305964984 +0000 UTC m=+0.128059083 container init 7b9da0756f0409e0c83f3ea529ff92227c44079f494b846d950d7343fbafc874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:15:54 compute-0 podman[253987]: 2026-01-23 10:15:54.313170398 +0000 UTC m=+0.135264477 container start 7b9da0756f0409e0c83f3ea529ff92227c44079f494b846d950d7343fbafc874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_solomon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Jan 23 10:15:54 compute-0 podman[253987]: 2026-01-23 10:15:54.317224153 +0000 UTC m=+0.139318262 container attach 7b9da0756f0409e0c83f3ea529ff92227c44079f494b846d950d7343fbafc874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 10:15:54 compute-0 gracious_solomon[254003]: 167 167
Jan 23 10:15:54 compute-0 systemd[1]: libpod-7b9da0756f0409e0c83f3ea529ff92227c44079f494b846d950d7343fbafc874.scope: Deactivated successfully.
Jan 23 10:15:54 compute-0 podman[253987]: 2026-01-23 10:15:54.319161928 +0000 UTC m=+0.141256027 container died 7b9da0756f0409e0c83f3ea529ff92227c44079f494b846d950d7343fbafc874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_solomon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:15:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdc2ff1e9f305ede7fc21e0494c2dbe994836b2612c97a534809d13f6e24b87e-merged.mount: Deactivated successfully.
Jan 23 10:15:54 compute-0 podman[253987]: 2026-01-23 10:15:54.361857819 +0000 UTC m=+0.183951898 container remove 7b9da0756f0409e0c83f3ea529ff92227c44079f494b846d950d7343fbafc874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:15:54 compute-0 systemd[1]: libpod-conmon-7b9da0756f0409e0c83f3ea529ff92227c44079f494b846d950d7343fbafc874.scope: Deactivated successfully.
Jan 23 10:15:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v676: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 23 10:15:54 compute-0 podman[254030]: 2026-01-23 10:15:54.540487445 +0000 UTC m=+0.057657666 container create 6c1cbc8f126ffc6b2f94ec083d5933d4dca52f53b97373b8faefbee087d40beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 10:15:54 compute-0 systemd[1]: Started libpod-conmon-6c1cbc8f126ffc6b2f94ec083d5933d4dca52f53b97373b8faefbee087d40beb.scope.
Jan 23 10:15:54 compute-0 podman[254030]: 2026-01-23 10:15:54.518179702 +0000 UTC m=+0.035350023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:15:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/570554f3702b1a25f8bc6762a77b2c273b72ee8af312d312d7e4805c105e4860/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/570554f3702b1a25f8bc6762a77b2c273b72ee8af312d312d7e4805c105e4860/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/570554f3702b1a25f8bc6762a77b2c273b72ee8af312d312d7e4805c105e4860/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/570554f3702b1a25f8bc6762a77b2c273b72ee8af312d312d7e4805c105e4860/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/570554f3702b1a25f8bc6762a77b2c273b72ee8af312d312d7e4805c105e4860/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:54 compute-0 podman[254030]: 2026-01-23 10:15:54.743855902 +0000 UTC m=+0.261026143 container init 6c1cbc8f126ffc6b2f94ec083d5933d4dca52f53b97373b8faefbee087d40beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_rosalind, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 10:15:54 compute-0 podman[254030]: 2026-01-23 10:15:54.757559021 +0000 UTC m=+0.274729282 container start 6c1cbc8f126ffc6b2f94ec083d5933d4dca52f53b97373b8faefbee087d40beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:15:54 compute-0 podman[254030]: 2026-01-23 10:15:54.7621097 +0000 UTC m=+0.279279951 container attach 6c1cbc8f126ffc6b2f94ec083d5933d4dca52f53b97373b8faefbee087d40beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:15:54 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:15:54 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:15:54 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:54 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:54 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:15:54 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:15:54 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:15:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:54 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:55 compute-0 thirsty_rosalind[254047]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:15:55 compute-0 thirsty_rosalind[254047]: --> All data devices are unavailable
Jan 23 10:15:55 compute-0 systemd[1]: libpod-6c1cbc8f126ffc6b2f94ec083d5933d4dca52f53b97373b8faefbee087d40beb.scope: Deactivated successfully.
Jan 23 10:15:55 compute-0 podman[254030]: 2026-01-23 10:15:55.125527877 +0000 UTC m=+0.642698118 container died 6c1cbc8f126ffc6b2f94ec083d5933d4dca52f53b97373b8faefbee087d40beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_rosalind, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 23 10:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-570554f3702b1a25f8bc6762a77b2c273b72ee8af312d312d7e4805c105e4860-merged.mount: Deactivated successfully.
Jan 23 10:15:55 compute-0 podman[254030]: 2026-01-23 10:15:55.173814226 +0000 UTC m=+0.690984447 container remove 6c1cbc8f126ffc6b2f94ec083d5933d4dca52f53b97373b8faefbee087d40beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 23 10:15:55 compute-0 systemd[1]: libpod-conmon-6c1cbc8f126ffc6b2f94ec083d5933d4dca52f53b97373b8faefbee087d40beb.scope: Deactivated successfully.
Jan 23 10:15:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:55 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:55 compute-0 sudo[253923]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:55 compute-0 sudo[254074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:15:55 compute-0 sudo[254074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:55 compute-0 sudo[254074]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:55 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:55 compute-0 sudo[254099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:15:55 compute-0 sudo[254099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:55.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:55 compute-0 podman[254166]: 2026-01-23 10:15:55.735938508 +0000 UTC m=+0.039703767 container create 54470a9fae5797c1a7eec6fe0e1d1cb35eab1ab4613e2632392b42e65f7073fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cannon, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:15:55 compute-0 systemd[1]: Started libpod-conmon-54470a9fae5797c1a7eec6fe0e1d1cb35eab1ab4613e2632392b42e65f7073fe.scope.
Jan 23 10:15:55 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:15:55 compute-0 podman[254166]: 2026-01-23 10:15:55.8167506 +0000 UTC m=+0.120515879 container init 54470a9fae5797c1a7eec6fe0e1d1cb35eab1ab4613e2632392b42e65f7073fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:15:55 compute-0 podman[254166]: 2026-01-23 10:15:55.721515259 +0000 UTC m=+0.025280548 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:15:55 compute-0 podman[254166]: 2026-01-23 10:15:55.823002717 +0000 UTC m=+0.126767976 container start 54470a9fae5797c1a7eec6fe0e1d1cb35eab1ab4613e2632392b42e65f7073fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 23 10:15:55 compute-0 podman[254166]: 2026-01-23 10:15:55.826499286 +0000 UTC m=+0.130264575 container attach 54470a9fae5797c1a7eec6fe0e1d1cb35eab1ab4613e2632392b42e65f7073fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:15:55 compute-0 fervent_cannon[254182]: 167 167
Jan 23 10:15:55 compute-0 systemd[1]: libpod-54470a9fae5797c1a7eec6fe0e1d1cb35eab1ab4613e2632392b42e65f7073fe.scope: Deactivated successfully.
Jan 23 10:15:55 compute-0 conmon[254182]: conmon 54470a9fae5797c1a7ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54470a9fae5797c1a7eec6fe0e1d1cb35eab1ab4613e2632392b42e65f7073fe.scope/container/memory.events
Jan 23 10:15:55 compute-0 podman[254166]: 2026-01-23 10:15:55.830208522 +0000 UTC m=+0.133973781 container died 54470a9fae5797c1a7eec6fe0e1d1cb35eab1ab4613e2632392b42e65f7073fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cannon, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a09a212142adbd6b3c586bb2bb1c60cf4e20bff87b5b58c7153ff1fde1799f3-merged.mount: Deactivated successfully.
Jan 23 10:15:55 compute-0 podman[254166]: 2026-01-23 10:15:55.863733712 +0000 UTC m=+0.167498971 container remove 54470a9fae5797c1a7eec6fe0e1d1cb35eab1ab4613e2632392b42e65f7073fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cannon, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:15:55 compute-0 systemd[1]: libpod-conmon-54470a9fae5797c1a7eec6fe0e1d1cb35eab1ab4613e2632392b42e65f7073fe.scope: Deactivated successfully.
Jan 23 10:15:56 compute-0 ceph-mon[74335]: pgmap v676: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 23 10:15:56 compute-0 podman[254206]: 2026-01-23 10:15:56.037387757 +0000 UTC m=+0.044904724 container create a87963e354a07f815c92ab2634fe6df3c30f49b091a6d107310a754accd7cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:15:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:15:56 compute-0 systemd[1]: Started libpod-conmon-a87963e354a07f815c92ab2634fe6df3c30f49b091a6d107310a754accd7cdb7.scope.
Jan 23 10:15:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cd3358e3c52972929773e7de0c9d9b939621a63878551c3427a4f356fb2c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cd3358e3c52972929773e7de0c9d9b939621a63878551c3427a4f356fb2c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cd3358e3c52972929773e7de0c9d9b939621a63878551c3427a4f356fb2c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cd3358e3c52972929773e7de0c9d9b939621a63878551c3427a4f356fb2c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:56 compute-0 podman[254206]: 2026-01-23 10:15:56.108767302 +0000 UTC m=+0.116284289 container init a87963e354a07f815c92ab2634fe6df3c30f49b091a6d107310a754accd7cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cerf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Jan 23 10:15:56 compute-0 podman[254206]: 2026-01-23 10:15:56.020235071 +0000 UTC m=+0.027752078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:15:56 compute-0 podman[254206]: 2026-01-23 10:15:56.117461478 +0000 UTC m=+0.124978445 container start a87963e354a07f815c92ab2634fe6df3c30f49b091a6d107310a754accd7cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:15:56 compute-0 podman[254206]: 2026-01-23 10:15:56.120383551 +0000 UTC m=+0.127900518 container attach a87963e354a07f815c92ab2634fe6df3c30f49b091a6d107310a754accd7cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 10:15:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:15:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:56.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:15:56 compute-0 amazing_cerf[254223]: {
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:     "1": [
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:         {
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             "devices": [
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "/dev/loop3"
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             ],
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             "lv_name": "ceph_lv0",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             "lv_size": "21470642176",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             "name": "ceph_lv0",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             "tags": {
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.cluster_name": "ceph",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.crush_device_class": "",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.encrypted": "0",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.osd_id": "1",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.type": "block",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.vdo": "0",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:                 "ceph.with_tpm": "0"
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             },
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             "type": "block",
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:             "vg_name": "ceph_vg0"
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:         }
Jan 23 10:15:56 compute-0 amazing_cerf[254223]:     ]
Jan 23 10:15:56 compute-0 amazing_cerf[254223]: }
Jan 23 10:15:56 compute-0 systemd[1]: libpod-a87963e354a07f815c92ab2634fe6df3c30f49b091a6d107310a754accd7cdb7.scope: Deactivated successfully.
Jan 23 10:15:56 compute-0 podman[254206]: 2026-01-23 10:15:56.42258195 +0000 UTC m=+0.430098917 container died a87963e354a07f815c92ab2634fe6df3c30f49b091a6d107310a754accd7cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cerf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 10:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a5cd3358e3c52972929773e7de0c9d9b939621a63878551c3427a4f356fb2c8-merged.mount: Deactivated successfully.
Jan 23 10:15:56 compute-0 podman[254206]: 2026-01-23 10:15:56.462137612 +0000 UTC m=+0.469654579 container remove a87963e354a07f815c92ab2634fe6df3c30f49b091a6d107310a754accd7cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_cerf, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:15:56 compute-0 systemd[1]: libpod-conmon-a87963e354a07f815c92ab2634fe6df3c30f49b091a6d107310a754accd7cdb7.scope: Deactivated successfully.
Jan 23 10:15:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v677: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 18 op/s
Jan 23 10:15:56 compute-0 sudo[254099]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:56 compute-0 sudo[254243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:15:56 compute-0 sudo[254243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:56 compute-0 sudo[254243]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:56 compute-0 sudo[254268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:15:56 compute-0 sudo[254268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:56 compute-0 nova_compute[249229]: 2026-01-23 10:15:56.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:15:56 compute-0 nova_compute[249229]: 2026-01-23 10:15:56.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:15:56 compute-0 nova_compute[249229]: 2026-01-23 10:15:56.718 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 10:15:56 compute-0 nova_compute[249229]: 2026-01-23 10:15:56.734 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 10:15:56 compute-0 nova_compute[249229]: 2026-01-23 10:15:56.734 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:15:56 compute-0 nova_compute[249229]: 2026-01-23 10:15:56.735 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 10:15:56 compute-0 nova_compute[249229]: 2026-01-23 10:15:56.746 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:15:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:56 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:57 compute-0 podman[254333]: 2026-01-23 10:15:57.032686223 +0000 UTC m=+0.037690020 container create 21297e54f5f35c631042f1a918fd1704a140af8e3dce1ccc4e32c94c86a9c58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:15:57 compute-0 ceph-mon[74335]: pgmap v677: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 18 op/s
Jan 23 10:15:57 compute-0 systemd[1]: Started libpod-conmon-21297e54f5f35c631042f1a918fd1704a140af8e3dce1ccc4e32c94c86a9c58c.scope.
Jan 23 10:15:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:15:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:57.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:15:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:57.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:15:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:15:57.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:15:57 compute-0 podman[254333]: 2026-01-23 10:15:57.018777219 +0000 UTC m=+0.023781036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:15:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:57 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:57 compute-0 podman[254333]: 2026-01-23 10:15:57.213752138 +0000 UTC m=+0.218755945 container init 21297e54f5f35c631042f1a918fd1704a140af8e3dce1ccc4e32c94c86a9c58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_morse, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 10:15:57 compute-0 podman[254333]: 2026-01-23 10:15:57.224315048 +0000 UTC m=+0.229318835 container start 21297e54f5f35c631042f1a918fd1704a140af8e3dce1ccc4e32c94c86a9c58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:15:57 compute-0 elegant_morse[254349]: 167 167
Jan 23 10:15:57 compute-0 systemd[1]: libpod-21297e54f5f35c631042f1a918fd1704a140af8e3dce1ccc4e32c94c86a9c58c.scope: Deactivated successfully.
Jan 23 10:15:57 compute-0 podman[254333]: 2026-01-23 10:15:57.238889611 +0000 UTC m=+0.243893448 container attach 21297e54f5f35c631042f1a918fd1704a140af8e3dce1ccc4e32c94c86a9c58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_morse, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:15:57 compute-0 podman[254333]: 2026-01-23 10:15:57.239867129 +0000 UTC m=+0.244870996 container died 21297e54f5f35c631042f1a918fd1704a140af8e3dce1ccc4e32c94c86a9c58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_morse, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:15:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-740ee64d36df4f6926bd9e3a096b659759cdb0ecbba8c2a348d10763f29594fb-merged.mount: Deactivated successfully.
Jan 23 10:15:57 compute-0 podman[254333]: 2026-01-23 10:15:57.295231289 +0000 UTC m=+0.300235096 container remove 21297e54f5f35c631042f1a918fd1704a140af8e3dce1ccc4e32c94c86a9c58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:15:57 compute-0 systemd[1]: libpod-conmon-21297e54f5f35c631042f1a918fd1704a140af8e3dce1ccc4e32c94c86a9c58c.scope: Deactivated successfully.
Jan 23 10:15:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:57 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:57 compute-0 podman[254374]: 2026-01-23 10:15:57.485257848 +0000 UTC m=+0.051951404 container create e9c7ed8e6de4ae82e9d69036632d3610e73f7b0455c2b031dc2c84f5573e49d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_neumann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:15:57 compute-0 systemd[1]: Started libpod-conmon-e9c7ed8e6de4ae82e9d69036632d3610e73f7b0455c2b031dc2c84f5573e49d2.scope.
Jan 23 10:15:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a19ec19d4e24de09d5988bbfeae9aa30829757c64c124b9bc807710c9e6793/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:57 compute-0 podman[254374]: 2026-01-23 10:15:57.467635259 +0000 UTC m=+0.034328835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a19ec19d4e24de09d5988bbfeae9aa30829757c64c124b9bc807710c9e6793/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a19ec19d4e24de09d5988bbfeae9aa30829757c64c124b9bc807710c9e6793/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a19ec19d4e24de09d5988bbfeae9aa30829757c64c124b9bc807710c9e6793/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:15:57 compute-0 podman[254374]: 2026-01-23 10:15:57.581057075 +0000 UTC m=+0.147750791 container init e9c7ed8e6de4ae82e9d69036632d3610e73f7b0455c2b031dc2c84f5573e49d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_neumann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Jan 23 10:15:57 compute-0 podman[254374]: 2026-01-23 10:15:57.589015231 +0000 UTC m=+0.155708827 container start e9c7ed8e6de4ae82e9d69036632d3610e73f7b0455c2b031dc2c84f5573e49d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:15:57 compute-0 podman[254374]: 2026-01-23 10:15:57.594232239 +0000 UTC m=+0.160925825 container attach e9c7ed8e6de4ae82e9d69036632d3610e73f7b0455c2b031dc2c84f5573e49d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_neumann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 10:15:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:57.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:58 compute-0 lvm[254467]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:15:58 compute-0 lvm[254467]: VG ceph_vg0 finished
Jan 23 10:15:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:15:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:58.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:15:58 compute-0 great_neumann[254391]: {}
Jan 23 10:15:58 compute-0 systemd[1]: libpod-e9c7ed8e6de4ae82e9d69036632d3610e73f7b0455c2b031dc2c84f5573e49d2.scope: Deactivated successfully.
Jan 23 10:15:58 compute-0 podman[254374]: 2026-01-23 10:15:58.298235005 +0000 UTC m=+0.864928561 container died e9c7ed8e6de4ae82e9d69036632d3610e73f7b0455c2b031dc2c84f5573e49d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_neumann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 10:15:58 compute-0 systemd[1]: libpod-e9c7ed8e6de4ae82e9d69036632d3610e73f7b0455c2b031dc2c84f5573e49d2.scope: Consumed 1.067s CPU time.
Jan 23 10:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-31a19ec19d4e24de09d5988bbfeae9aa30829757c64c124b9bc807710c9e6793-merged.mount: Deactivated successfully.
Jan 23 10:15:58 compute-0 podman[254374]: 2026-01-23 10:15:58.348429548 +0000 UTC m=+0.915123104 container remove e9c7ed8e6de4ae82e9d69036632d3610e73f7b0455c2b031dc2c84f5573e49d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_neumann, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 23 10:15:58 compute-0 systemd[1]: libpod-conmon-e9c7ed8e6de4ae82e9d69036632d3610e73f7b0455c2b031dc2c84f5573e49d2.scope: Deactivated successfully.
Jan 23 10:15:58 compute-0 sudo[254268]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:15:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v678: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:15:58 compute-0 nova_compute[249229]: 2026-01-23 10:15:58.756 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:15:58 compute-0 nova_compute[249229]: 2026-01-23 10:15:58.790 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:15:58 compute-0 nova_compute[249229]: 2026-01-23 10:15:58.791 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:15:58 compute-0 nova_compute[249229]: 2026-01-23 10:15:58.791 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:15:58 compute-0 nova_compute[249229]: 2026-01-23 10:15:58.791 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:15:58 compute-0 nova_compute[249229]: 2026-01-23 10:15:58.791 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:15:58 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:58 compute-0 sudo[254484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:15:58 compute-0 sudo[254484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:15:58 compute-0 sudo[254484]: pam_unix(sudo:session): session closed for user root
Jan 23 10:15:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:58 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:59 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:15:59 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650861742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.244 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:15:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:15:59 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.419 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.421 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4905MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.421 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.422 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:15:59 compute-0 ceph-mon[74335]: pgmap v678: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:15:59 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:59 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:15:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3650861742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.635 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.636 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:15:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:15:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:15:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:59.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.703 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing inventories for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 10:15:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:15:59.770 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.770 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating ProviderTree inventory for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.770 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating inventory in ProviderTree for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 10:15:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:15:59.770 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:15:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:15:59.771 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.787 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing aggregate associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.853 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing trait associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 10:15:59 compute-0 nova_compute[249229]: 2026-01-23 10:15:59.869 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:15:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:59] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 23 10:15:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:15:59] "GET /metrics HTTP/1.1" 200 48485 "" "Prometheus/2.51.0"
Jan 23 10:16:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:00.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:16:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003629658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:16:00 compute-0 nova_compute[249229]: 2026-01-23 10:16:00.302 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:16:00 compute-0 nova_compute[249229]: 2026-01-23 10:16:00.307 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:16:00 compute-0 nova_compute[249229]: 2026-01-23 10:16:00.327 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:16:00 compute-0 nova_compute[249229]: 2026-01-23 10:16:00.329 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:16:00 compute-0 nova_compute[249229]: 2026-01-23 10:16:00.329 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v679: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:16:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3003629658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:16:00 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:16:00 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:00 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:00.987 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:16:00 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:00.988 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:16:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:16:01 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:01 compute-0 nova_compute[249229]: 2026-01-23 10:16:01.290 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:16:01 compute-0 nova_compute[249229]: 2026-01-23 10:16:01.291 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:16:01 compute-0 nova_compute[249229]: 2026-01-23 10:16:01.291 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:16:01 compute-0 nova_compute[249229]: 2026-01-23 10:16:01.311 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:16:01 compute-0 nova_compute[249229]: 2026-01-23 10:16:01.312 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:16:01 compute-0 nova_compute[249229]: 2026-01-23 10:16:01.313 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:16:01 compute-0 nova_compute[249229]: 2026-01-23 10:16:01.314 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:16:01 compute-0 nova_compute[249229]: 2026-01-23 10:16:01.314 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:16:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:16:01 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:01.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:01 compute-0 nova_compute[249229]: 2026-01-23 10:16:01.732 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:16:01 compute-0 ceph-mon[74335]: pgmap v679: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:16:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/472721166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:16:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:02.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v680: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:16:02 compute-0 nova_compute[249229]: 2026-01-23 10:16:02.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:16:02 compute-0 nova_compute[249229]: 2026-01-23 10:16:02.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:16:02 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4013248704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:16:02 compute-0 ceph-mon[74335]: pgmap v680: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:16:02 compute-0 sudo[254556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:16:02 compute-0 sudo[254556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:16:02 compute-0 sudo[254556]: pam_unix(sudo:session): session closed for user root
Jan 23 10:16:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:16:02 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:16:03 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:16:03 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:03.611Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:16:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:03.612Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:16:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:03.612Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:16:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:03.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:04.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101604 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:16:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v681: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:16:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/455437505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:16:04 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:16:04 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e440032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:16:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:16:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:16:05 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e54004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:16:05 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e60001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:05 compute-0 ceph-mon[74335]: pgmap v681: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:16:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:16:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3264499221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:16:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:05.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:06.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v682: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:16:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:16:06 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:07.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:16:07 compute-0 kernel: ganesha.nfsd[252566]: segfault at 50 ip 00007f5efbec632e sp 00007f5e6cff8210 error 4 in libntirpc.so.5.8[7f5efbeab000+2c000] likely on CPU 0 (core 0, socket 0)
Jan 23 10:16:07 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 23 10:16:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[251496]: 23/01/2026 10:16:07 : epoch 69734995 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5e740040c0 fd 38 proxy ignored for local
Jan 23 10:16:07 compute-0 systemd[1]: Started Process Core Dump (PID 254585/UID 0).
Jan 23 10:16:07 compute-0 ceph-mon[74335]: pgmap v682: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:16:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:07.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:07 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:07.990 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:16:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:08.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:08 compute-0 systemd-coredump[254586]: Process 251500 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 57:
                                                    #0  0x00007f5efbec632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 23 10:16:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v683: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:16:08 compute-0 systemd[1]: systemd-coredump@7-254585-0.service: Deactivated successfully.
Jan 23 10:16:08 compute-0 systemd[1]: systemd-coredump@7-254585-0.service: Consumed 1.307s CPU time.
Jan 23 10:16:08 compute-0 podman[254593]: 2026-01-23 10:16:08.650577039 +0000 UTC m=+0.034552681 container died 0fddaa8774d77ce08f23ec7c205e86e5445782b9a1aaa38f07872d28a02d4d5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-477d1c01d34cd065bda3a294d6711bbddf8bdb37fff4b81be92f1bc09e11c35d-merged.mount: Deactivated successfully.
Jan 23 10:16:08 compute-0 podman[254593]: 2026-01-23 10:16:08.691129069 +0000 UTC m=+0.075104681 container remove 0fddaa8774d77ce08f23ec7c205e86e5445782b9a1aaa38f07872d28a02d4d5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:16:08 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
Jan 23 10:16:08 compute-0 podman[254608]: 2026-01-23 10:16:08.792383201 +0000 UTC m=+0.085400673 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:16:08 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 10:16:08 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.958s CPU time.
Jan 23 10:16:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:09.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:09 compute-0 ceph-mon[74335]: pgmap v683: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:16:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:09] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 23 10:16:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:09] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 23 10:16:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:10.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v684: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:16:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:11.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:11 compute-0 ceph-mon[74335]: pgmap v684: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:16:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:12.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v685: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:16:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101613 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:16:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:13.613Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:16:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:13.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:13 compute-0 ceph-mon[74335]: pgmap v685: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:16:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:14.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v686: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:16:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:15.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:15 compute-0 ceph-mon[74335]: pgmap v686: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:16:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:16:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:16.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:16:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v687: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:16:16 compute-0 ceph-mon[74335]: pgmap v687: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:16:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:17.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:16:17 compute-0 podman[254669]: 2026-01-23 10:16:17.518304858 +0000 UTC m=+0.048814436 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 10:16:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:17.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:18 compute-0 nova_compute[249229]: 2026-01-23 10:16:18.232 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:18 compute-0 nova_compute[249229]: 2026-01-23 10:16:18.233 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:18 compute-0 nova_compute[249229]: 2026-01-23 10:16:18.257 249233 DEBUG nova.compute.manager [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 10:16:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:18.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:18 compute-0 nova_compute[249229]: 2026-01-23 10:16:18.359 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:18 compute-0 nova_compute[249229]: 2026-01-23 10:16:18.359 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:18 compute-0 nova_compute[249229]: 2026-01-23 10:16:18.365 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 10:16:18 compute-0 nova_compute[249229]: 2026-01-23 10:16:18.366 249233 INFO nova.compute.claims [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Claim successful on node compute-0.ctlplane.example.com
Jan 23 10:16:18 compute-0 nova_compute[249229]: 2026-01-23 10:16:18.480 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:16:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v688: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:16:18 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 8.
Jan 23 10:16:18 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:16:18 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.958s CPU time.
Jan 23 10:16:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:16:18 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/420910609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:16:18 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 10:16:18 compute-0 nova_compute[249229]: 2026-01-23 10:16:18.976 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:16:18 compute-0 nova_compute[249229]: 2026-01-23 10:16:18.982 249233 DEBUG nova.compute.provider_tree [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.131 249233 DEBUG nova.scheduler.client.report [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.159 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.159 249233 DEBUG nova.compute.manager [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 10:16:19 compute-0 podman[254765]: 2026-01-23 10:16:19.167352075 +0000 UTC m=+0.050581596 container create 0cf8d60bfb72762bf9544d1dbb65a80fa5e606e4b7e050f91cd95cf3caeee354 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae16c0efd8c7a12acb5f3d6cd51f2397e2ef12abc34ee31c35e3751c2d8679a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 10:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae16c0efd8c7a12acb5f3d6cd51f2397e2ef12abc34ee31c35e3751c2d8679a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae16c0efd8c7a12acb5f3d6cd51f2397e2ef12abc34ee31c35e3751c2d8679a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae16c0efd8c7a12acb5f3d6cd51f2397e2ef12abc34ee31c35e3751c2d8679a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.206 249233 DEBUG nova.compute.manager [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.206 249233 DEBUG nova.network.neutron [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 10:16:19 compute-0 podman[254765]: 2026-01-23 10:16:19.225080672 +0000 UTC m=+0.108310223 container init 0cf8d60bfb72762bf9544d1dbb65a80fa5e606e4b7e050f91cd95cf3caeee354 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.229 249233 INFO nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 10:16:19 compute-0 podman[254765]: 2026-01-23 10:16:19.232659687 +0000 UTC m=+0.115889208 container start 0cf8d60bfb72762bf9544d1dbb65a80fa5e606e4b7e050f91cd95cf3caeee354 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:16:19 compute-0 podman[254765]: 2026-01-23 10:16:19.140928325 +0000 UTC m=+0.024157876 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:16:19 compute-0 bash[254765]: 0cf8d60bfb72762bf9544d1dbb65a80fa5e606e4b7e050f91cd95cf3caeee354
Jan 23 10:16:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:19 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 10:16:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:19 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 10:16:19 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.247 249233 DEBUG nova.compute.manager [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 10:16:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:19 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 10:16:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:19 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 10:16:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:19 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 10:16:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:19 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 10:16:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:19 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 10:16:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:19 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.329 249233 DEBUG nova.compute.manager [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.331 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.331 249233 INFO nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Creating image(s)
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.366 249233 DEBUG nova.storage.rbd_utils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.399 249233 DEBUG nova.storage.rbd_utils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.429 249233 DEBUG nova.storage.rbd_utils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.433 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "379b2821245bc82aa5a95839eddb9a97716b559c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:19 compute-0 nova_compute[249229]: 2026-01-23 10:16:19.434 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "379b2821245bc82aa5a95839eddb9a97716b559c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:19 compute-0 ceph-mon[74335]: pgmap v688: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:16:19 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/420910609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:16:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:19.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:19] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 23 10:16:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:19] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 23 10:16:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:16:19
Jan 23 10:16:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:16:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:16:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.mgr', 'backups', 'vms', 'cephfs.cephfs.meta', '.nfs', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.log']
Jan 23 10:16:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:16:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:16:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:16:20 compute-0 nova_compute[249229]: 2026-01-23 10:16:20.135 249233 DEBUG nova.virt.libvirt.imagebackend [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Image locations are: [{'url': 'rbd://f3005f84-239a-55b6-a948-8f1fb592b920/images/271ec98e-d058-421b-bbfb-4b4a5954c90a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f3005f84-239a-55b6-a948-8f1fb592b920/images/271ec98e-d058-421b-bbfb-4b4a5954c90a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:16:20 compute-0 nova_compute[249229]: 2026-01-23 10:16:20.209 249233 WARNING oslo_policy.policy [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 23 10:16:20 compute-0 nova_compute[249229]: 2026-01-23 10:16:20.210 249233 WARNING oslo_policy.policy [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 23 10:16:20 compute-0 nova_compute[249229]: 2026-01-23 10:16:20.212 249233 DEBUG nova.policy [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f459c4e71e6c47acb0f8aaf83f34695e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 10:16:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:20.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:16:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v689: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:16:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:16:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101620 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:16:20 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [ALERT] 022/101620 (4) : backend 'backend' has no server available!
Jan 23 10:16:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.160 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.216 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c.part --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.218 249233 DEBUG nova.virt.images [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] 271ec98e-d058-421b-bbfb-4b4a5954c90a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.219 249233 DEBUG nova.privsep.utils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.220 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c.part /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.514 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c.part /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c.converted" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.518 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.573 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c.converted --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.574 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "379b2821245bc82aa5a95839eddb9a97716b559c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.600 249233 DEBUG nova.storage.rbd_utils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.604 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:16:21 compute-0 nova_compute[249229]: 2026-01-23 10:16:21.637 249233 DEBUG nova.network.neutron [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Successfully created port: 7db1962f-3a42-428d-955f-aaac0cf186c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 10:16:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:21.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 23 10:16:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 23 10:16:21 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 23 10:16:21 compute-0 ceph-mon[74335]: pgmap v689: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:16:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:22.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v691: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 716 B/s wr, 10 op/s
Jan 23 10:16:22 compute-0 nova_compute[249229]: 2026-01-23 10:16:22.727 249233 DEBUG nova.network.neutron [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Successfully updated port: 7db1962f-3a42-428d-955f-aaac0cf186c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 10:16:22 compute-0 nova_compute[249229]: 2026-01-23 10:16:22.742 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:16:22 compute-0 nova_compute[249229]: 2026-01-23 10:16:22.742 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquired lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:16:22 compute-0 nova_compute[249229]: 2026-01-23 10:16:22.743 249233 DEBUG nova.network.neutron [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 10:16:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 23 10:16:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 23 10:16:22 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 23 10:16:23 compute-0 sudo[254932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:16:23 compute-0 sudo[254932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:16:23 compute-0 sudo[254932]: pam_unix(sudo:session): session closed for user root
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.191 249233 DEBUG nova.network.neutron [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.246 249233 DEBUG nova.compute.manager [req-3fab4791-9e78-4d15-8643-494f2b7945f0 req-57122762-a7f3-40fd-a9e5-ad95b4291ab2 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Received event network-changed-7db1962f-3a42-428d-955f-aaac0cf186c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.247 249233 DEBUG nova.compute.manager [req-3fab4791-9e78-4d15-8643-494f2b7945f0 req-57122762-a7f3-40fd-a9e5-ad95b4291ab2 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Refreshing instance network info cache due to event network-changed-7db1962f-3a42-428d-955f-aaac0cf186c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.247 249233 DEBUG oslo_concurrency.lockutils [req-3fab4791-9e78-4d15-8643-494f2b7945f0 req-57122762-a7f3-40fd-a9e5-ad95b4291ab2 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:16:23 compute-0 ceph-mon[74335]: osdmap e144: 3 total, 3 up, 3 in
Jan 23 10:16:23 compute-0 ceph-mon[74335]: pgmap v691: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 716 B/s wr, 10 op/s
Jan 23 10:16:23 compute-0 ceph-mon[74335]: osdmap e145: 3 total, 3 up, 3 in
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.425 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.821s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.488 249233 DEBUG nova.storage.rbd_utils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] resizing rbd image f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 10:16:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:23.614Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.642 249233 DEBUG nova.objects.instance [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'migration_context' on Instance uuid f385572a-ade5-4da0-b6d8-d6bb5cdc919e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.658 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.658 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Ensure instance console log exists: /var/lib/nova/instances/f385572a-ade5-4da0-b6d8-d6bb5cdc919e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.659 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.659 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.660 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:23.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.841 249233 DEBUG nova.network.neutron [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Updating instance_info_cache with network_info: [{"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.860 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Releasing lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.861 249233 DEBUG nova.compute.manager [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Instance network_info: |[{"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.862 249233 DEBUG oslo_concurrency.lockutils [req-3fab4791-9e78-4d15-8643-494f2b7945f0 req-57122762-a7f3-40fd-a9e5-ad95b4291ab2 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquired lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.863 249233 DEBUG nova.network.neutron [req-3fab4791-9e78-4d15-8643-494f2b7945f0 req-57122762-a7f3-40fd-a9e5-ad95b4291ab2 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Refreshing network info cache for port 7db1962f-3a42-428d-955f-aaac0cf186c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.867 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Start _get_guest_xml network_info=[{"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T10:15:36Z,direct_url=<?>,disk_format='qcow2',id=271ec98e-d058-421b-bbfb-4b4a5954c90a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5220cd4f58cb43bb899e367e961bc5c1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T10:15:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'size': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '271ec98e-d058-421b-bbfb-4b4a5954c90a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.873 249233 WARNING nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.879 249233 DEBUG nova.virt.libvirt.host [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.880 249233 DEBUG nova.virt.libvirt.host [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.888 249233 DEBUG nova.virt.libvirt.host [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.889 249233 DEBUG nova.virt.libvirt.host [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.890 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.890 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T10:15:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1d8c8bf4-786e-4009-bc53-f259480fb5b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T10:15:36Z,direct_url=<?>,disk_format='qcow2',id=271ec98e-d058-421b-bbfb-4b4a5954c90a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5220cd4f58cb43bb899e367e961bc5c1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T10:15:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.891 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.891 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.891 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.891 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.892 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.892 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.892 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.893 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.893 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.893 249233 DEBUG nova.virt.hardware [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.897 249233 DEBUG nova.privsep.utils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 23 10:16:23 compute-0 nova_compute[249229]: 2026-01-23 10:16:23.897 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:16:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:16:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:24.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:16:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 10:16:24 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399276032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.382 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.410 249233 DEBUG nova.storage.rbd_utils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.415 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:16:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v693: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 383 B/s wr, 11 op/s
Jan 23 10:16:24 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1399276032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:16:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 10:16:24 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1942849923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.891 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.893 249233 DEBUG nova.virt.libvirt.vif [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-896397354',display_name='tempest-TestNetworkBasicOps-server-896397354',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-896397354',id=1,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBdrrN909tRIL4+NsC0YFSFTi3EuZsB2pesGmAyVsHAWGns8IyxroukgzCqNJ0STgim697i6oxgop6PVFjv6RyikBB+iN2/4f4D0fD1li8fNUXFCCnib2uuGD3w4Sjam9Q==',key_name='tempest-TestNetworkBasicOps-526025578',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-0bqn5yko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:16:19Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=f385572a-ade5-4da0-b6d8-d6bb5cdc919e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.893 249233 DEBUG nova.network.os_vif_util [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.894 249233 DEBUG nova.network.os_vif_util [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:89:66,bridge_name='br-int',has_traffic_filtering=True,id=7db1962f-3a42-428d-955f-aaac0cf186c5,network=Network(4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7db1962f-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.896 249233 DEBUG nova.objects.instance [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'pci_devices' on Instance uuid f385572a-ade5-4da0-b6d8-d6bb5cdc919e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.917 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] End _get_guest_xml xml=<domain type="kvm">
Jan 23 10:16:24 compute-0 nova_compute[249229]:   <uuid>f385572a-ade5-4da0-b6d8-d6bb5cdc919e</uuid>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   <name>instance-00000001</name>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   <memory>131072</memory>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   <vcpu>1</vcpu>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   <metadata>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <nova:name>tempest-TestNetworkBasicOps-server-896397354</nova:name>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <nova:creationTime>2026-01-23 10:16:23</nova:creationTime>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <nova:flavor name="m1.nano">
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <nova:memory>128</nova:memory>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <nova:disk>1</nova:disk>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <nova:swap>0</nova:swap>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <nova:ephemeral>0</nova:ephemeral>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <nova:vcpus>1</nova:vcpus>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       </nova:flavor>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <nova:owner>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <nova:user uuid="f459c4e71e6c47acb0f8aaf83f34695e">tempest-TestNetworkBasicOps-655467240-project-member</nova:user>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <nova:project uuid="acc90003f0f7412b8daf8a1b6f0f1494">tempest-TestNetworkBasicOps-655467240</nova:project>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       </nova:owner>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <nova:root type="image" uuid="271ec98e-d058-421b-bbfb-4b4a5954c90a"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <nova:ports>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <nova:port uuid="7db1962f-3a42-428d-955f-aaac0cf186c5">
Jan 23 10:16:24 compute-0 nova_compute[249229]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         </nova:port>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       </nova:ports>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     </nova:instance>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   </metadata>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   <sysinfo type="smbios">
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <system>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <entry name="manufacturer">RDO</entry>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <entry name="product">OpenStack Compute</entry>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <entry name="serial">f385572a-ade5-4da0-b6d8-d6bb5cdc919e</entry>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <entry name="uuid">f385572a-ade5-4da0-b6d8-d6bb5cdc919e</entry>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <entry name="family">Virtual Machine</entry>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     </system>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   </sysinfo>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   <os>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <boot dev="hd"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <smbios mode="sysinfo"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   </os>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   <features>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <acpi/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <apic/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <vmcoreinfo/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   </features>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   <clock offset="utc">
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <timer name="pit" tickpolicy="delay"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <timer name="hpet" present="no"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   </clock>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   <cpu mode="host-model" match="exact">
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <topology sockets="1" cores="1" threads="1"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   </cpu>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   <devices>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <disk type="network" device="disk">
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <driver type="raw" cache="none"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <source protocol="rbd" name="vms/f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk">
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <host name="192.168.122.100" port="6789"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <host name="192.168.122.102" port="6789"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <host name="192.168.122.101" port="6789"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       </source>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <auth username="openstack">
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <secret type="ceph" uuid="f3005f84-239a-55b6-a948-8f1fb592b920"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       </auth>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <target dev="vda" bus="virtio"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <disk type="network" device="cdrom">
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <driver type="raw" cache="none"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <source protocol="rbd" name="vms/f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk.config">
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <host name="192.168.122.100" port="6789"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <host name="192.168.122.102" port="6789"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <host name="192.168.122.101" port="6789"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       </source>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <auth username="openstack">
Jan 23 10:16:24 compute-0 nova_compute[249229]:         <secret type="ceph" uuid="f3005f84-239a-55b6-a948-8f1fb592b920"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       </auth>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <target dev="sda" bus="sata"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <interface type="ethernet">
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <mac address="fa:16:3e:45:89:66"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <model type="virtio"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <driver name="vhost" rx_queue_size="512"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <mtu size="1442"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <target dev="tap7db1962f-3a"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     </interface>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <serial type="pty">
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <log file="/var/lib/nova/instances/f385572a-ade5-4da0-b6d8-d6bb5cdc919e/console.log" append="off"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     </serial>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <video>
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <model type="virtio"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     </video>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <input type="tablet" bus="usb"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <rng model="virtio">
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <backend model="random">/dev/urandom</backend>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     </rng>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <controller type="usb" index="0"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     <memballoon model="virtio">
Jan 23 10:16:24 compute-0 nova_compute[249229]:       <stats period="10"/>
Jan 23 10:16:24 compute-0 nova_compute[249229]:     </memballoon>
Jan 23 10:16:24 compute-0 nova_compute[249229]:   </devices>
Jan 23 10:16:24 compute-0 nova_compute[249229]: </domain>
Jan 23 10:16:24 compute-0 nova_compute[249229]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.919 249233 DEBUG nova.compute.manager [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Preparing to wait for external event network-vif-plugged-7db1962f-3a42-428d-955f-aaac0cf186c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.919 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.919 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.920 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.920 249233 DEBUG nova.virt.libvirt.vif [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-896397354',display_name='tempest-TestNetworkBasicOps-server-896397354',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-896397354',id=1,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBdrrN909tRIL4+NsC0YFSFTi3EuZsB2pesGmAyVsHAWGns8IyxroukgzCqNJ0STgim697i6oxgop6PVFjv6RyikBB+iN2/4f4D0fD1li8fNUXFCCnib2uuGD3w4Sjam9Q==',key_name='tempest-TestNetworkBasicOps-526025578',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-0bqn5yko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:16:19Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=f385572a-ade5-4da0-b6d8-d6bb5cdc919e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.921 249233 DEBUG nova.network.os_vif_util [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.921 249233 DEBUG nova.network.os_vif_util [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:89:66,bridge_name='br-int',has_traffic_filtering=True,id=7db1962f-3a42-428d-955f-aaac0cf186c5,network=Network(4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7db1962f-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.921 249233 DEBUG os_vif [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:89:66,bridge_name='br-int',has_traffic_filtering=True,id=7db1962f-3a42-428d-955f-aaac0cf186c5,network=Network(4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7db1962f-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.964 249233 DEBUG ovsdbapp.backend.ovs_idl [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.964 249233 DEBUG ovsdbapp.backend.ovs_idl [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.965 249233 DEBUG ovsdbapp.backend.ovs_idl [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.965 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.966 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.966 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.967 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.968 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.970 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.984 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.985 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.985 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 10:16:24 compute-0 nova_compute[249229]: 2026-01-23 10:16:24.987 249233 INFO oslo.privsep.daemon [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp_1x_jasi/privsep.sock']
Jan 23 10:16:25 compute-0 nova_compute[249229]: 2026-01-23 10:16:25.245 249233 DEBUG nova.network.neutron [req-3fab4791-9e78-4d15-8643-494f2b7945f0 req-57122762-a7f3-40fd-a9e5-ad95b4291ab2 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Updated VIF entry in instance network info cache for port 7db1962f-3a42-428d-955f-aaac0cf186c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 10:16:25 compute-0 nova_compute[249229]: 2026-01-23 10:16:25.247 249233 DEBUG nova.network.neutron [req-3fab4791-9e78-4d15-8643-494f2b7945f0 req-57122762-a7f3-40fd-a9e5-ad95b4291ab2 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Updating instance_info_cache with network_info: [{"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:16:25 compute-0 nova_compute[249229]: 2026-01-23 10:16:25.408 249233 DEBUG oslo_concurrency.lockutils [req-3fab4791-9e78-4d15-8643-494f2b7945f0 req-57122762-a7f3-40fd-a9e5-ad95b4291ab2 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Releasing lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:16:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:25 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 23 10:16:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:25 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 23 10:16:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:25 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:16:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:25 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:16:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:25 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 23 10:16:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:25 compute-0 nova_compute[249229]: 2026-01-23 10:16:25.703 249233 INFO oslo.privsep.daemon [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Spawned new privsep daemon via rootwrap
Jan 23 10:16:25 compute-0 nova_compute[249229]: 2026-01-23 10:16:25.577 255097 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 10:16:25 compute-0 nova_compute[249229]: 2026-01-23 10:16:25.581 255097 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 10:16:25 compute-0 nova_compute[249229]: 2026-01-23 10:16:25.583 255097 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 23 10:16:25 compute-0 nova_compute[249229]: 2026-01-23 10:16:25.583 255097 INFO oslo.privsep.daemon [-] privsep daemon running as pid 255097
Jan 23 10:16:25 compute-0 ceph-mon[74335]: pgmap v693: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 383 B/s wr, 11 op/s
Jan 23 10:16:25 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1942849923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:16:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.081 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.082 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7db1962f-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.083 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7db1962f-3a, col_values=(('external_ids', {'iface-id': '7db1962f-3a42-428d-955f-aaac0cf186c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:89:66', 'vm-uuid': 'f385572a-ade5-4da0-b6d8-d6bb5cdc919e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.084 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:26 compute-0 NetworkManager[48866]: <info>  [1769163386.0859] manager: (tap7db1962f-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.088 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.092 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.093 249233 INFO os_vif [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:89:66,bridge_name='br-int',has_traffic_filtering=True,id=7db1962f-3a42-428d-955f-aaac0cf186c5,network=Network(4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7db1962f-3a')
Jan 23 10:16:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:26 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:16:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:26 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:16:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:26 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:16:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:26.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.337 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.337 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.338 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No VIF found with MAC fa:16:3e:45:89:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.339 249233 INFO nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Using config drive
Jan 23 10:16:26 compute-0 nova_compute[249229]: 2026-01-23 10:16:26.369 249233 DEBUG nova.storage.rbd_utils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:16:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v694: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 54 op/s
Jan 23 10:16:27 compute-0 nova_compute[249229]: 2026-01-23 10:16:27.082 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:27.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:16:27 compute-0 ceph-mon[74335]: pgmap v694: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 54 op/s
Jan 23 10:16:27 compute-0 nova_compute[249229]: 2026-01-23 10:16:27.451 249233 INFO nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Creating config drive at /var/lib/nova/instances/f385572a-ade5-4da0-b6d8-d6bb5cdc919e/disk.config
Jan 23 10:16:27 compute-0 nova_compute[249229]: 2026-01-23 10:16:27.455 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f385572a-ade5-4da0-b6d8-d6bb5cdc919e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4egpm1ns execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:16:27 compute-0 nova_compute[249229]: 2026-01-23 10:16:27.578 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f385572a-ade5-4da0-b6d8-d6bb5cdc919e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4egpm1ns" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:16:27 compute-0 nova_compute[249229]: 2026-01-23 10:16:27.613 249233 DEBUG nova.storage.rbd_utils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:16:27 compute-0 nova_compute[249229]: 2026-01-23 10:16:27.617 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f385572a-ade5-4da0-b6d8-d6bb5cdc919e/disk.config f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:16:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:27.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:28 compute-0 nova_compute[249229]: 2026-01-23 10:16:28.251 249233 DEBUG oslo_concurrency.processutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f385572a-ade5-4da0-b6d8-d6bb5cdc919e/disk.config f385572a-ade5-4da0-b6d8-d6bb5cdc919e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:16:28 compute-0 nova_compute[249229]: 2026-01-23 10:16:28.252 249233 INFO nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Deleting local config drive /var/lib/nova/instances/f385572a-ade5-4da0-b6d8-d6bb5cdc919e/disk.config because it was imported into RBD.
Jan 23 10:16:28 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 23 10:16:28 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 23 10:16:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:28.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101628 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:16:28 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 23 10:16:28 compute-0 kernel: tap7db1962f-3a: entered promiscuous mode
Jan 23 10:16:28 compute-0 NetworkManager[48866]: <info>  [1769163388.3710] manager: (tap7db1962f-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 23 10:16:28 compute-0 ovn_controller[151634]: 2026-01-23T10:16:28Z|00027|binding|INFO|Claiming lport 7db1962f-3a42-428d-955f-aaac0cf186c5 for this chassis.
Jan 23 10:16:28 compute-0 ovn_controller[151634]: 2026-01-23T10:16:28Z|00028|binding|INFO|7db1962f-3a42-428d-955f-aaac0cf186c5: Claiming fa:16:3e:45:89:66 10.100.0.14
Jan 23 10:16:28 compute-0 nova_compute[249229]: 2026-01-23 10:16:28.371 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:28 compute-0 nova_compute[249229]: 2026-01-23 10:16:28.375 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:28 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:28.389 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:89:66 10.100.0.14'], port_security=['fa:16:3e:45:89:66 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f385572a-ade5-4da0-b6d8-d6bb5cdc919e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'neutron:revision_number': '2', 'neutron:security_group_ids': '22a38a63-e659-46d6-a24c-d4af0f15baaf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ae92df3-ded7-43fb-bc56-81665ce8e357, chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], logical_port=7db1962f-3a42-428d-955f-aaac0cf186c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:16:28 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:28.390 161921 INFO neutron.agent.ovn.metadata.agent [-] Port 7db1962f-3a42-428d-955f-aaac0cf186c5 in datapath 4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3 bound to our chassis
Jan 23 10:16:28 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:28.392 161921 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3
Jan 23 10:16:28 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:28.394 161921 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp5z6r0kvj/privsep.sock']
Jan 23 10:16:28 compute-0 systemd-udevd[255198]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:16:28 compute-0 NetworkManager[48866]: <info>  [1769163388.4146] device (tap7db1962f-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 10:16:28 compute-0 NetworkManager[48866]: <info>  [1769163388.4152] device (tap7db1962f-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 10:16:28 compute-0 systemd-machined[216411]: New machine qemu-1-instance-00000001.
Jan 23 10:16:28 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 23 10:16:28 compute-0 nova_compute[249229]: 2026-01-23 10:16:28.457 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:28 compute-0 ovn_controller[151634]: 2026-01-23T10:16:28Z|00029|binding|INFO|Setting lport 7db1962f-3a42-428d-955f-aaac0cf186c5 ovn-installed in OVS
Jan 23 10:16:28 compute-0 ovn_controller[151634]: 2026-01-23T10:16:28Z|00030|binding|INFO|Setting lport 7db1962f-3a42-428d-955f-aaac0cf186c5 up in Southbound
Jan 23 10:16:28 compute-0 nova_compute[249229]: 2026-01-23 10:16:28.464 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v695: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 54 op/s
Jan 23 10:16:29 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:29.168 161921 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 23 10:16:29 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:29.169 161921 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp5z6r0kvj/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 23 10:16:29 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:28.956 255218 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 10:16:29 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:28.960 255218 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 10:16:29 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:28.963 255218 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 23 10:16:29 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:28.963 255218 INFO oslo.privsep.daemon [-] privsep daemon running as pid 255218
Jan 23 10:16:29 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:29.172 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc3cc5f-324d-4799-9dc1-14d69abc4ebd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.182 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769163389.1813095, f385572a-ade5-4da0-b6d8-d6bb5cdc919e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.183 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] VM Started (Lifecycle Event)
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.238 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.242 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769163389.1816602, f385572a-ade5-4da0-b6d8-d6bb5cdc919e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.243 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] VM Paused (Lifecycle Event)
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.260 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.264 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.281 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.351 249233 DEBUG nova.compute.manager [req-faa1ba1a-5281-4dca-8f93-8a8fedb7cbe4 req-ec62f166-a76f-436c-9061-cff64e8c32d6 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Received event network-vif-plugged-7db1962f-3a42-428d-955f-aaac0cf186c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.351 249233 DEBUG oslo_concurrency.lockutils [req-faa1ba1a-5281-4dca-8f93-8a8fedb7cbe4 req-ec62f166-a76f-436c-9061-cff64e8c32d6 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.352 249233 DEBUG oslo_concurrency.lockutils [req-faa1ba1a-5281-4dca-8f93-8a8fedb7cbe4 req-ec62f166-a76f-436c-9061-cff64e8c32d6 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.352 249233 DEBUG oslo_concurrency.lockutils [req-faa1ba1a-5281-4dca-8f93-8a8fedb7cbe4 req-ec62f166-a76f-436c-9061-cff64e8c32d6 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.352 249233 DEBUG nova.compute.manager [req-faa1ba1a-5281-4dca-8f93-8a8fedb7cbe4 req-ec62f166-a76f-436c-9061-cff64e8c32d6 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Processing event network-vif-plugged-7db1962f-3a42-428d-955f-aaac0cf186c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.353 249233 DEBUG nova.compute.manager [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.368 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.369 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769163389.3680766, f385572a-ade5-4da0-b6d8-d6bb5cdc919e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.370 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] VM Resumed (Lifecycle Event)
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.374 249233 INFO nova.virt.libvirt.driver [-] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Instance spawned successfully.
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.375 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.403 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.409 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.413 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.414 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.415 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.415 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.416 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.416 249233 DEBUG nova.virt.libvirt.driver [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.442 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.584 249233 INFO nova.compute.manager [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Took 10.25 seconds to spawn the instance on the hypervisor.
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.586 249233 DEBUG nova.compute.manager [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:16:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:16:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:29.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:16:29 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:29.782 255218 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:29 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:29.782 255218 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:29 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:29.782 255218 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:29 compute-0 ceph-mon[74335]: pgmap v695: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 54 op/s
Jan 23 10:16:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:29] "GET /metrics HTTP/1.1" 200 48406 "" "Prometheus/2.51.0"
Jan 23 10:16:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:29] "GET /metrics HTTP/1.1" 200 48406 "" "Prometheus/2.51.0"
Jan 23 10:16:29 compute-0 nova_compute[249229]: 2026-01-23 10:16:29.996 249233 INFO nova.compute.manager [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Took 11.67 seconds to build instance.
Jan 23 10:16:30 compute-0 nova_compute[249229]: 2026-01-23 10:16:30.025 249233 DEBUG oslo_concurrency.lockutils [None req-0eae4015-f763-4293-97d4-c4b6e1b67e01 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:30.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:30.465 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d0c50e-3e9a-4fa0-916c-4a4a26377576]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:30.466 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f4a5a80-81 in ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 23 10:16:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:30.468 255218 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f4a5a80-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 23 10:16:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:30.468 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[a10fe1e1-ca93-4e36-acc1-ccc45f6fe604]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:30.471 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[4631806c-f023-4b6f-9068-c5ffc71c43c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:30.498 162436 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9beba4-f69d-48a7-856d-567f17a745d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v696: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.4 MiB/s wr, 39 op/s
Jan 23 10:16:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:30.525 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[abb9cf12-e7d9-4ed2-834d-8ec65a480602]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:30.527 161921 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpv8hp88b7/privsep.sock']
Jan 23 10:16:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 23 10:16:31 compute-0 nova_compute[249229]: 2026-01-23 10:16:31.084 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:31 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:31.263 161921 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 23 10:16:31 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:31.264 161921 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpv8hp88b7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 23 10:16:31 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:31.115 255276 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 10:16:31 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:31.119 255276 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 10:16:31 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:31.121 255276 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 23 10:16:31 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:31.121 255276 INFO oslo.privsep.daemon [-] privsep daemon running as pid 255276
Jan 23 10:16:31 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:31.267 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[5a91ca39-d8e2-4ba5-830f-9e82c6ee4fe5]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:31 compute-0 ceph-mon[74335]: pgmap v696: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.4 MiB/s wr, 39 op/s
Jan 23 10:16:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 23 10:16:31 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 23 10:16:31 compute-0 nova_compute[249229]: 2026-01-23 10:16:31.456 249233 DEBUG nova.compute.manager [req-a57387c2-9581-4141-88a7-2703aae32cf6 req-fd23e9f7-c60b-4d05-b682-df83dcb945b8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Received event network-vif-plugged-7db1962f-3a42-428d-955f-aaac0cf186c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:16:31 compute-0 nova_compute[249229]: 2026-01-23 10:16:31.457 249233 DEBUG oslo_concurrency.lockutils [req-a57387c2-9581-4141-88a7-2703aae32cf6 req-fd23e9f7-c60b-4d05-b682-df83dcb945b8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:31 compute-0 nova_compute[249229]: 2026-01-23 10:16:31.457 249233 DEBUG oslo_concurrency.lockutils [req-a57387c2-9581-4141-88a7-2703aae32cf6 req-fd23e9f7-c60b-4d05-b682-df83dcb945b8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:31 compute-0 nova_compute[249229]: 2026-01-23 10:16:31.457 249233 DEBUG oslo_concurrency.lockutils [req-a57387c2-9581-4141-88a7-2703aae32cf6 req-fd23e9f7-c60b-4d05-b682-df83dcb945b8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:31 compute-0 nova_compute[249229]: 2026-01-23 10:16:31.458 249233 DEBUG nova.compute.manager [req-a57387c2-9581-4141-88a7-2703aae32cf6 req-fd23e9f7-c60b-4d05-b682-df83dcb945b8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] No waiting events found dispatching network-vif-plugged-7db1962f-3a42-428d-955f-aaac0cf186c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:16:31 compute-0 nova_compute[249229]: 2026-01-23 10:16:31.458 249233 WARNING nova.compute.manager [req-a57387c2-9581-4141-88a7-2703aae32cf6 req-fd23e9f7-c60b-4d05-b682-df83dcb945b8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Received unexpected event network-vif-plugged-7db1962f-3a42-428d-955f-aaac0cf186c5 for instance with vm_state active and task_state None.
Jan 23 10:16:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:31.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:31 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:31.860 255276 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:31 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:31.861 255276 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:31 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:31.861 255276 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:32 compute-0 nova_compute[249229]: 2026-01-23 10:16:32.084 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000001e:nfs.cephfs.2: -2
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:16:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:32.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:16:32 compute-0 ceph-mon[74335]: osdmap e146: 3 total, 3 up, 3 in
Jan 23 10:16:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:32 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.483 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[334f7a08-37b5-4f02-b517-894631fa6d8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:32 compute-0 NetworkManager[48866]: <info>  [1769163392.5042] manager: (tap4f4a5a80-80): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.502 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[73486103-9420-4bcb-a030-ffce500a4641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v698: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Jan 23 10:16:32 compute-0 systemd-udevd[255302]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.537 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9153c2-dfec-4641-9766-d92c8ef7b0da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.545 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5b09b7-0465-49da-9547-4ece81b49526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:32 compute-0 NetworkManager[48866]: <info>  [1769163392.5765] device (tap4f4a5a80-80): carrier: link connected
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.577 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[aff61a96-7ab6-40b1-97fa-6b74725e4114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.594 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[633ed8ab-590c-4357-86c7-5367a05dd586]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f4a5a80-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:47:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451410, 'reachable_time': 35247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255320, 'error': None, 'target': 'ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.610 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[75e8dd0f-75f3-4bb3-9416-10e584e1bbd1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:4772'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451410, 'tstamp': 451410}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255321, 'error': None, 'target': 'ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.625 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[b9841509-fab9-4ce4-a537-a72aa2c0ee6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f4a5a80-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:47:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451410, 'reachable_time': 35247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255322, 'error': None, 'target': 'ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.666 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[9a31abd7-f7cc-41e7-83e9-b2eca4b199ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.722 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[0e850bc4-01cf-417a-b086-ecc85c6d641e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.724 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f4a5a80-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.725 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.725 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f4a5a80-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:16:32 compute-0 nova_compute[249229]: 2026-01-23 10:16:32.727 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:32 compute-0 NetworkManager[48866]: <info>  [1769163392.7281] manager: (tap4f4a5a80-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 23 10:16:32 compute-0 kernel: tap4f4a5a80-80: entered promiscuous mode
Jan 23 10:16:32 compute-0 nova_compute[249229]: 2026-01-23 10:16:32.732 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.733 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f4a5a80-80, col_values=(('external_ids', {'iface-id': 'a5ec1982-fcf7-420c-bc38-1abd9fc4085a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:16:32 compute-0 nova_compute[249229]: 2026-01-23 10:16:32.735 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:32 compute-0 ovn_controller[151634]: 2026-01-23T10:16:32Z|00031|binding|INFO|Releasing lport a5ec1982-fcf7-420c-bc38-1abd9fc4085a from this chassis (sb_readonly=0)
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.737 161921 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.737 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[f96017a8-bcd9-4047-bc0a-6710237cbbc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.739 161921 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: global
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     log         /dev/log local0 debug
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     log-tag     haproxy-metadata-proxy-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     user        root
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     group       root
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     maxconn     1024
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     pidfile     /var/lib/neutron/external/pids/4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3.pid.haproxy
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     daemon
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: defaults
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     log global
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     mode http
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     option httplog
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     option dontlognull
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     option http-server-close
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     option forwardfor
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     retries                 3
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     timeout http-request    30s
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     timeout connect         30s
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     timeout client          32s
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     timeout server          32s
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     timeout http-keep-alive 30s
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: listen listener
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     bind 169.254.169.254:80
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     server metadata /var/lib/neutron/metadata_proxy
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:     http-request add-header X-OVN-Network-ID 4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 23 10:16:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:32.741 161921 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3', 'env', 'PROCESS_TAG=haproxy-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 23 10:16:32 compute-0 nova_compute[249229]: 2026-01-23 10:16:32.751 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:33 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:33 compute-0 podman[255359]: 2026-01-23 10:16:33.142185662 +0000 UTC m=+0.048100445 container create 836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 10:16:33 compute-0 systemd[1]: Started libpod-conmon-836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27.scope.
Jan 23 10:16:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d486e17e0f42eb5bb3e615861903d7e136a404c852ac295efd4d0efa0c32c7ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 10:16:33 compute-0 podman[255359]: 2026-01-23 10:16:33.11744122 +0000 UTC m=+0.023356003 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 23 10:16:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:33 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001950 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:33 compute-0 podman[255359]: 2026-01-23 10:16:33.21862719 +0000 UTC m=+0.124541993 container init 836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 10:16:33 compute-0 podman[255359]: 2026-01-23 10:16:33.224818215 +0000 UTC m=+0.130732998 container start 836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:16:33 compute-0 neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3[255374]: [NOTICE]   (255378) : New worker (255380) forked
Jan 23 10:16:33 compute-0 neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3[255374]: [NOTICE]   (255378) : Loading success.
Jan 23 10:16:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:33 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:33 compute-0 NetworkManager[48866]: <info>  [1769163393.5682] manager: (patch-provnet-995e8c2d-ca55-405c-bf26-97e408875e42-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Jan 23 10:16:33 compute-0 NetworkManager[48866]: <info>  [1769163393.5687] device (patch-provnet-995e8c2d-ca55-405c-bf26-97e408875e42-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 10:16:33 compute-0 NetworkManager[48866]: <warn>  [1769163393.5690] device (patch-provnet-995e8c2d-ca55-405c-bf26-97e408875e42-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 10:16:33 compute-0 NetworkManager[48866]: <info>  [1769163393.5697] manager: (patch-br-int-to-provnet-995e8c2d-ca55-405c-bf26-97e408875e42): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Jan 23 10:16:33 compute-0 NetworkManager[48866]: <info>  [1769163393.5700] device (patch-br-int-to-provnet-995e8c2d-ca55-405c-bf26-97e408875e42)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 10:16:33 compute-0 NetworkManager[48866]: <warn>  [1769163393.5701] device (patch-br-int-to-provnet-995e8c2d-ca55-405c-bf26-97e408875e42)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 10:16:33 compute-0 NetworkManager[48866]: <info>  [1769163393.5707] manager: (patch-provnet-995e8c2d-ca55-405c-bf26-97e408875e42-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 23 10:16:33 compute-0 NetworkManager[48866]: <info>  [1769163393.5712] manager: (patch-br-int-to-provnet-995e8c2d-ca55-405c-bf26-97e408875e42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 23 10:16:33 compute-0 nova_compute[249229]: 2026-01-23 10:16:33.565 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:33 compute-0 NetworkManager[48866]: <info>  [1769163393.5719] device (patch-provnet-995e8c2d-ca55-405c-bf26-97e408875e42-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 23 10:16:33 compute-0 NetworkManager[48866]: <info>  [1769163393.5723] device (patch-br-int-to-provnet-995e8c2d-ca55-405c-bf26-97e408875e42)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 23 10:16:33 compute-0 ovn_controller[151634]: 2026-01-23T10:16:33Z|00032|binding|INFO|Releasing lport a5ec1982-fcf7-420c-bc38-1abd9fc4085a from this chassis (sb_readonly=0)
Jan 23 10:16:33 compute-0 ovn_controller[151634]: 2026-01-23T10:16:33Z|00033|binding|INFO|Releasing lport a5ec1982-fcf7-420c-bc38-1abd9fc4085a from this chassis (sb_readonly=0)
Jan 23 10:16:33 compute-0 nova_compute[249229]: 2026-01-23 10:16:33.597 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:33 compute-0 nova_compute[249229]: 2026-01-23 10:16:33.600 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:33.615Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:16:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:33.616Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:16:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:33.616Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:16:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:33.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:34 compute-0 ceph-mon[74335]: pgmap v698: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Jan 23 10:16:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:34.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v699: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 23 10:16:34 compute-0 nova_compute[249229]: 2026-01-23 10:16:34.838 249233 DEBUG nova.compute.manager [req-7d846cd7-9811-4d8a-a1bc-6911d1d884b5 req-621c87bf-a2bf-40e0-9131-9cd6421690ef 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Received event network-changed-7db1962f-3a42-428d-955f-aaac0cf186c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:16:34 compute-0 nova_compute[249229]: 2026-01-23 10:16:34.838 249233 DEBUG nova.compute.manager [req-7d846cd7-9811-4d8a-a1bc-6911d1d884b5 req-621c87bf-a2bf-40e0-9131-9cd6421690ef 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Refreshing instance network info cache due to event network-changed-7db1962f-3a42-428d-955f-aaac0cf186c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 10:16:34 compute-0 nova_compute[249229]: 2026-01-23 10:16:34.839 249233 DEBUG oslo_concurrency.lockutils [req-7d846cd7-9811-4d8a-a1bc-6911d1d884b5 req-621c87bf-a2bf-40e0-9131-9cd6421690ef 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:16:34 compute-0 nova_compute[249229]: 2026-01-23 10:16:34.839 249233 DEBUG oslo_concurrency.lockutils [req-7d846cd7-9811-4d8a-a1bc-6911d1d884b5 req-621c87bf-a2bf-40e0-9131-9cd6421690ef 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquired lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:16:34 compute-0 nova_compute[249229]: 2026-01-23 10:16:34.839 249233 DEBUG nova.network.neutron [req-7d846cd7-9811-4d8a-a1bc-6911d1d884b5 req-621c87bf-a2bf-40e0-9131-9cd6421690ef 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Refreshing network info cache for port 7db1962f-3a42-428d-955f-aaac0cf186c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 10:16:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:35 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:16:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:16:35 compute-0 ceph-mon[74335]: pgmap v699: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 23 10:16:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:16:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101635 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:16:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:35 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:35 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001950 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:35 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:16:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:35 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:16:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:35.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:36 compute-0 nova_compute[249229]: 2026-01-23 10:16:36.087 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.211400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163396211529, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1185, "num_deletes": 260, "total_data_size": 1969475, "memory_usage": 1992856, "flush_reason": "Manual Compaction"}
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163396257892, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1940856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22627, "largest_seqno": 23811, "table_properties": {"data_size": 1935237, "index_size": 2950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12062, "raw_average_key_size": 19, "raw_value_size": 1923624, "raw_average_value_size": 3077, "num_data_blocks": 130, "num_entries": 625, "num_filter_entries": 625, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163301, "oldest_key_time": 1769163301, "file_creation_time": 1769163396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 46546 microseconds, and 16250 cpu microseconds.
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.257946) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1940856 bytes OK
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.257971) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.307424) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.307466) EVENT_LOG_v1 {"time_micros": 1769163396307458, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.307485) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1964124, prev total WAL file size 1964124, number of live WAL files 2.
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.308078) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323535' seq:72057594037927935, type:22 .. '6C6F676D00353131' seq:0, type:0; will stop at (end)
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(1895KB)], [50(10MB)]
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163396308168, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13334252, "oldest_snapshot_seqno": -1}
Jan 23 10:16:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:36.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v700: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 92 op/s
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5380 keys, 13142619 bytes, temperature: kUnknown
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163396518190, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13142619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13106245, "index_size": 21800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 137751, "raw_average_key_size": 25, "raw_value_size": 13008180, "raw_average_value_size": 2417, "num_data_blocks": 888, "num_entries": 5380, "num_filter_entries": 5380, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769163396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.518462) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13142619 bytes
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.722443) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.5 rd, 62.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 10.9 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(13.6) write-amplify(6.8) OK, records in: 5920, records dropped: 540 output_compression: NoCompression
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.722502) EVENT_LOG_v1 {"time_micros": 1769163396722481, "job": 26, "event": "compaction_finished", "compaction_time_micros": 210090, "compaction_time_cpu_micros": 24823, "output_level": 6, "num_output_files": 1, "total_output_size": 13142619, "num_input_records": 5920, "num_output_records": 5380, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163396723246, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163396726956, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.308032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.727002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.727009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.727013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.727016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:16:36 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:16:36.727019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:16:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:37 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:37 compute-0 nova_compute[249229]: 2026-01-23 10:16:37.086 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:37.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:16:37 compute-0 nova_compute[249229]: 2026-01-23 10:16:37.145 249233 DEBUG nova.network.neutron [req-7d846cd7-9811-4d8a-a1bc-6911d1d884b5 req-621c87bf-a2bf-40e0-9131-9cd6421690ef 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Updated VIF entry in instance network info cache for port 7db1962f-3a42-428d-955f-aaac0cf186c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 10:16:37 compute-0 nova_compute[249229]: 2026-01-23 10:16:37.146 249233 DEBUG nova.network.neutron [req-7d846cd7-9811-4d8a-a1bc-6911d1d884b5 req-621c87bf-a2bf-40e0-9131-9cd6421690ef 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Updating instance_info_cache with network_info: [{"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:16:37 compute-0 nova_compute[249229]: 2026-01-23 10:16:37.164 249233 DEBUG oslo_concurrency.lockutils [req-7d846cd7-9811-4d8a-a1bc-6911d1d884b5 req-621c87bf-a2bf-40e0-9131-9cd6421690ef 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Releasing lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:16:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:37 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:37 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:37 compute-0 ceph-mon[74335]: pgmap v700: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 92 op/s
Jan 23 10:16:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:37.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:38.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:38 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:16:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v701: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 92 op/s
Jan 23 10:16:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:39 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001950 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:39 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:39 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:39 compute-0 podman[255396]: 2026-01-23 10:16:39.568657577 +0000 UTC m=+0.090715444 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 10:16:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:39.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:39 compute-0 ceph-mon[74335]: pgmap v701: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 92 op/s
Jan 23 10:16:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:39] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Jan 23 10:16:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:39] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Jan 23 10:16:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:40.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v702: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 92 op/s
Jan 23 10:16:40 compute-0 ceph-mon[74335]: pgmap v702: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 92 op/s
Jan 23 10:16:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:41 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:41 compute-0 nova_compute[249229]: 2026-01-23 10:16:41.089 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:41 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:41 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:41.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:42 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 10:16:42 compute-0 nova_compute[249229]: 2026-01-23 10:16:42.087 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:42.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v703: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 548 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Jan 23 10:16:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101642 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:16:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:43 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:43 compute-0 sudo[255429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:16:43 compute-0 sudo[255429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:16:43 compute-0 sudo[255429]: pam_unix(sudo:session): session closed for user root
Jan 23 10:16:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:43 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:43 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:43.618Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:16:43 compute-0 ceph-mon[74335]: pgmap v703: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 548 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Jan 23 10:16:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:43.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:16:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:44.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:16:44 compute-0 ovn_controller[151634]: 2026-01-23T10:16:44Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:89:66 10.100.0.14
Jan 23 10:16:44 compute-0 ovn_controller[151634]: 2026-01-23T10:16:44Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:89:66 10.100.0.14
Jan 23 10:16:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v704: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 767 B/s wr, 9 op/s
Jan 23 10:16:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:45 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:45 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:45 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:45 compute-0 ceph-mon[74335]: pgmap v704: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 767 B/s wr, 9 op/s
Jan 23 10:16:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:45.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:46 compute-0 nova_compute[249229]: 2026-01-23 10:16:46.091 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:46.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v705: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 23 10:16:46 compute-0 ceph-mon[74335]: pgmap v705: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 23 10:16:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:47 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:47 compute-0 nova_compute[249229]: 2026-01-23 10:16:47.091 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:47.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:16:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:47 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:47 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:47.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:48.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v706: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 10:16:48 compute-0 podman[255459]: 2026-01-23 10:16:48.54055082 +0000 UTC m=+0.065645813 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 10:16:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:16:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3378120931' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:16:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:16:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3378120931' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:16:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:49 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:49 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:49 compute-0 ceph-mon[74335]: pgmap v706: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 10:16:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:49 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:49.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:49 compute-0 nova_compute[249229]: 2026-01-23 10:16:49.898 249233 INFO nova.compute.manager [None req-e837a7e6-07fa-4e97-8947-aeeaca82762d f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Get console output
Jan 23 10:16:49 compute-0 nova_compute[249229]: 2026-01-23 10:16:49.904 249233 INFO oslo.privsep.daemon [None req-e837a7e6-07fa-4e97-8947-aeeaca82762d f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpsh92gw98/privsep.sock']
Jan 23 10:16:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:49] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Jan 23 10:16:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:49] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Jan 23 10:16:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:16:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:16:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:16:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:16:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:16:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:16:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:16:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:16:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:16:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:50.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:16:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v707: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 10:16:50 compute-0 nova_compute[249229]: 2026-01-23 10:16:50.620 249233 INFO oslo.privsep.daemon [None req-e837a7e6-07fa-4e97-8947-aeeaca82762d f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Spawned new privsep daemon via rootwrap
Jan 23 10:16:50 compute-0 nova_compute[249229]: 2026-01-23 10:16:50.472 255486 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 23 10:16:50 compute-0 nova_compute[249229]: 2026-01-23 10:16:50.476 255486 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 23 10:16:50 compute-0 nova_compute[249229]: 2026-01-23 10:16:50.479 255486 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 23 10:16:50 compute-0 nova_compute[249229]: 2026-01-23 10:16:50.479 255486 INFO oslo.privsep.daemon [-] privsep daemon running as pid 255486
Jan 23 10:16:50 compute-0 nova_compute[249229]: 2026-01-23 10:16:50.723 255486 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 23 10:16:50 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3378120931' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:16:50 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3378120931' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:16:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:16:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:51 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:51 compute-0 nova_compute[249229]: 2026-01-23 10:16:51.118 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:51 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:51 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:51.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:52 compute-0 ceph-mon[74335]: pgmap v707: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 10:16:52 compute-0 nova_compute[249229]: 2026-01-23 10:16:52.093 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:52.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v708: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 23 10:16:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:53 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:53 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:53 compute-0 ceph-mon[74335]: pgmap v708: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 23 10:16:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:53 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:53.619Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:16:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:53.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:54.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v709: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 23 10:16:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:55 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:55 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:55 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:55.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:55 compute-0 ceph-mon[74335]: pgmap v709: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 23 10:16:56 compute-0 nova_compute[249229]: 2026-01-23 10:16:56.171 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:16:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:56.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v710: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 23 10:16:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:57 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:57 compute-0 nova_compute[249229]: 2026-01-23 10:16:57.097 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:16:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:16:57.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:16:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:57 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:57 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:16:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:57.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:16:58 compute-0 ceph-mon[74335]: pgmap v710: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 23 10:16:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:58.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v711: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Jan 23 10:16:58 compute-0 nova_compute[249229]: 2026-01-23 10:16:58.732 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:16:58 compute-0 nova_compute[249229]: 2026-01-23 10:16:58.733 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:16:58 compute-0 nova_compute[249229]: 2026-01-23 10:16:58.767 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:58 compute-0 nova_compute[249229]: 2026-01-23 10:16:58.768 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:58 compute-0 nova_compute[249229]: 2026-01-23 10:16:58.768 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:58 compute-0 nova_compute[249229]: 2026-01-23 10:16:58.768 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:16:58 compute-0 nova_compute[249229]: 2026-01-23 10:16:58.768 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:16:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:59 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:16:59 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666916957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:16:59 compute-0 sudo[255516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:16:59 compute-0 sudo[255516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:16:59 compute-0 sudo[255516]: pam_unix(sudo:session): session closed for user root
Jan 23 10:16:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:59 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:59 compute-0 nova_compute[249229]: 2026-01-23 10:16:59.243 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:16:59 compute-0 sudo[255543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 10:16:59 compute-0 sudo[255543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:16:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:16:59 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:16:59 compute-0 nova_compute[249229]: 2026-01-23 10:16:59.486 249233 DEBUG nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 10:16:59 compute-0 nova_compute[249229]: 2026-01-23 10:16:59.487 249233 DEBUG nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 10:16:59 compute-0 nova_compute[249229]: 2026-01-23 10:16:59.658 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:16:59 compute-0 nova_compute[249229]: 2026-01-23 10:16:59.660 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4428MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:16:59 compute-0 nova_compute[249229]: 2026-01-23 10:16:59.660 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:59 compute-0 nova_compute[249229]: 2026-01-23 10:16:59.661 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:16:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:16:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:59.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:16:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:59.771 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:16:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:59.772 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:16:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:16:59.773 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:16:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:59] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Jan 23 10:16:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:16:59] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Jan 23 10:17:00 compute-0 podman[255642]: 2026-01-23 10:17:00.127508867 +0000 UTC m=+0.295628755 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.134 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Instance f385572a-ade5-4da0-b6d8-d6bb5cdc919e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.135 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.136 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.176 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:17:00 compute-0 podman[255642]: 2026-01-23 10:17:00.274761543 +0000 UTC m=+0.442881331 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Jan 23 10:17:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:00.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v712: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Jan 23 10:17:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:17:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117145556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.663 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.670 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating inventory in ProviderTree for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.706 249233 ERROR nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] [req-2c0398b6-3b08-45bd-9a13-586ac81b8568] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID a1f82a16-d7e7-4500-99d7-a20de995d9a2.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-2c0398b6-3b08-45bd-9a13-586ac81b8568"}]}
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.734 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing inventories for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.764 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating ProviderTree inventory for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.765 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating inventory in ProviderTree for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 10:17:00 compute-0 podman[255784]: 2026-01-23 10:17:00.767938529 +0000 UTC m=+0.059094046 container exec 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:17:00 compute-0 podman[255784]: 2026-01-23 10:17:00.782024049 +0000 UTC m=+0.073179536 container exec_died 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.788 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing aggregate associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.827 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing trait associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 10:17:00 compute-0 nova_compute[249229]: 2026-01-23 10:17:00.887 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:17:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:01 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:01 compute-0 podman[255893]: 2026-01-23 10:17:01.097716852 +0000 UTC m=+0.058467939 container exec 0cf8d60bfb72762bf9544d1dbb65a80fa5e606e4b7e050f91cd95cf3caeee354 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:17:01 compute-0 podman[255893]: 2026-01-23 10:17:01.115772414 +0000 UTC m=+0.076523501 container exec_died 0cf8d60bfb72762bf9544d1dbb65a80fa5e606e4b7e050f91cd95cf3caeee354 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:17:01 compute-0 nova_compute[249229]: 2026-01-23 10:17:01.135 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:01 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:01.134 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:17:01 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:01.135 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:17:01 compute-0 nova_compute[249229]: 2026-01-23 10:17:01.173 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:01 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:01 compute-0 podman[255958]: 2026-01-23 10:17:01.319393269 +0000 UTC m=+0.053075007 container exec 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 10:17:01 compute-0 podman[255958]: 2026-01-23 10:17:01.328464216 +0000 UTC m=+0.062145924 container exec_died 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 10:17:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:17:01 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419588943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:01 compute-0 nova_compute[249229]: 2026-01-23 10:17:01.352 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:17:01 compute-0 nova_compute[249229]: 2026-01-23 10:17:01.357 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating inventory in ProviderTree for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 10:17:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:01 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:01 compute-0 nova_compute[249229]: 2026-01-23 10:17:01.399 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updated inventory for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 23 10:17:01 compute-0 nova_compute[249229]: 2026-01-23 10:17:01.399 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 23 10:17:01 compute-0 nova_compute[249229]: 2026-01-23 10:17:01.399 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating inventory in ProviderTree for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 10:17:01 compute-0 nova_compute[249229]: 2026-01-23 10:17:01.428 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:17:01 compute-0 nova_compute[249229]: 2026-01-23 10:17:01.428 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:17:01 compute-0 podman[256027]: 2026-01-23 10:17:01.535758915 +0000 UTC m=+0.052166401 container exec 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, io.openshift.tags=Ceph keepalived, distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., name=keepalived, io.buildah.version=1.28.2, description=keepalived for Ceph, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 23 10:17:01 compute-0 podman[256027]: 2026-01-23 10:17:01.548808595 +0000 UTC m=+0.065216061 container exec_died 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, architecture=x86_64, distribution-scope=public, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, description=keepalived for Ceph, release=1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc.)
Jan 23 10:17:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:17:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:01.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:17:01 compute-0 podman[256091]: 2026-01-23 10:17:01.739682508 +0000 UTC m=+0.051460960 container exec a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:17:01 compute-0 podman[256091]: 2026-01-23 10:17:01.778747356 +0000 UTC m=+0.090525808 container exec_died a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:17:01 compute-0 ceph-mon[74335]: pgmap v711: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Jan 23 10:17:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1666916957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:01 compute-0 podman[256165]: 2026-01-23 10:17:01.992707734 +0000 UTC m=+0.058300924 container exec 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 10:17:02 compute-0 nova_compute[249229]: 2026-01-23 10:17:02.098 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:02 compute-0 podman[256165]: 2026-01-23 10:17:02.202897635 +0000 UTC m=+0.268490815 container exec_died 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 10:17:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:17:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:02.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:17:02 compute-0 nova_compute[249229]: 2026-01-23 10:17:02.412 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:17:02 compute-0 nova_compute[249229]: 2026-01-23 10:17:02.413 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:17:02 compute-0 nova_compute[249229]: 2026-01-23 10:17:02.413 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:17:02 compute-0 nova_compute[249229]: 2026-01-23 10:17:02.414 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:17:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v713: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 23 10:17:02 compute-0 nova_compute[249229]: 2026-01-23 10:17:02.571 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:17:02 compute-0 nova_compute[249229]: 2026-01-23 10:17:02.571 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquired lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:17:02 compute-0 nova_compute[249229]: 2026-01-23 10:17:02.572 249233 DEBUG nova.network.neutron [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 10:17:02 compute-0 nova_compute[249229]: 2026-01-23 10:17:02.572 249233 DEBUG nova.objects.instance [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f385572a-ade5-4da0-b6d8-d6bb5cdc919e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:17:02 compute-0 podman[256281]: 2026-01-23 10:17:02.587823332 +0000 UTC m=+0.063516303 container exec 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:17:02 compute-0 podman[256281]: 2026-01-23 10:17:02.651824797 +0000 UTC m=+0.127517778 container exec_died 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:17:02 compute-0 sudo[255543]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:17:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:17:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:17:02 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:17:02 compute-0 sudo[256325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:17:02 compute-0 sudo[256325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:17:02 compute-0 sudo[256325]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:02 compute-0 sudo[256350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:17:02 compute-0 sudo[256350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:17:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:03 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:03 compute-0 ceph-mon[74335]: pgmap v712: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Jan 23 10:17:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1117145556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3419588943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:03 compute-0 ceph-mon[74335]: pgmap v713: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 23 10:17:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2232532070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:03 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:17:03 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:17:03 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:03.137 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:17:03 compute-0 sudo[256391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:17:03 compute-0 sudo[256391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:17:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:03 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:03 compute-0 sudo[256391]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:03 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:03 compute-0 sudo[256350]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:17:03 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:17:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:17:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:17:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:17:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:03.620Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:17:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:17:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:03.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:17:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:17:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:17:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:17:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:17:03 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:17:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:17:03 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:17:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:17:03 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:17:03 compute-0 sudo[256433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:17:03 compute-0 sudo[256433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:17:03 compute-0 sudo[256433]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:03 compute-0 sudo[256459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:17:03 compute-0 sudo[256459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:17:04 compute-0 podman[256524]: 2026-01-23 10:17:04.300296079 +0000 UTC m=+0.040511270 container create f364d7ca268a73b632215ede86f2bc1a32e41e6c4b022aa7cd34bc1a6ec50f95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:17:04 compute-0 systemd[1]: Started libpod-conmon-f364d7ca268a73b632215ede86f2bc1a32e41e6c4b022aa7cd34bc1a6ec50f95.scope.
Jan 23 10:17:04 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:17:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2443746606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:04 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:17:04 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:17:04 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:17:04 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:17:04 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:17:04 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:17:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:04.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:04 compute-0 podman[256524]: 2026-01-23 10:17:04.282579436 +0000 UTC m=+0.022794647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:17:04 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:17:04 compute-0 podman[256524]: 2026-01-23 10:17:04.395861378 +0000 UTC m=+0.136076589 container init f364d7ca268a73b632215ede86f2bc1a32e41e6c4b022aa7cd34bc1a6ec50f95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:17:04 compute-0 podman[256524]: 2026-01-23 10:17:04.401421106 +0000 UTC m=+0.141636297 container start f364d7ca268a73b632215ede86f2bc1a32e41e6c4b022aa7cd34bc1a6ec50f95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:17:04 compute-0 podman[256524]: 2026-01-23 10:17:04.404223385 +0000 UTC m=+0.144438596 container attach f364d7ca268a73b632215ede86f2bc1a32e41e6c4b022aa7cd34bc1a6ec50f95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:17:04 compute-0 wizardly_snyder[256541]: 167 167
Jan 23 10:17:04 compute-0 podman[256524]: 2026-01-23 10:17:04.407172099 +0000 UTC m=+0.147387290 container died f364d7ca268a73b632215ede86f2bc1a32e41e6c4b022aa7cd34bc1a6ec50f95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:17:04 compute-0 systemd[1]: libpod-f364d7ca268a73b632215ede86f2bc1a32e41e6c4b022aa7cd34bc1a6ec50f95.scope: Deactivated successfully.
Jan 23 10:17:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7abd47c5c512350d50b70f6e644326e63e2481aa277149e41e477e547491792-merged.mount: Deactivated successfully.
Jan 23 10:17:04 compute-0 podman[256524]: 2026-01-23 10:17:04.451176257 +0000 UTC m=+0.191391448 container remove f364d7ca268a73b632215ede86f2bc1a32e41e6c4b022aa7cd34bc1a6ec50f95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:17:04 compute-0 systemd[1]: libpod-conmon-f364d7ca268a73b632215ede86f2bc1a32e41e6c4b022aa7cd34bc1a6ec50f95.scope: Deactivated successfully.
Jan 23 10:17:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v714: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:17:04 compute-0 podman[256565]: 2026-01-23 10:17:04.620107258 +0000 UTC m=+0.042686072 container create d74ac977f0ae46deab8912648e353417d4994ee9e9d0bd5a9773cf55ba797988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Jan 23 10:17:04 compute-0 systemd[1]: Started libpod-conmon-d74ac977f0ae46deab8912648e353417d4994ee9e9d0bd5a9773cf55ba797988.scope.
Jan 23 10:17:04 compute-0 podman[256565]: 2026-01-23 10:17:04.602074856 +0000 UTC m=+0.024653690 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:17:04 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01741375e41e76ac10f59bed412ccc5602aa7973391b99d47641dca39f1371b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01741375e41e76ac10f59bed412ccc5602aa7973391b99d47641dca39f1371b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01741375e41e76ac10f59bed412ccc5602aa7973391b99d47641dca39f1371b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01741375e41e76ac10f59bed412ccc5602aa7973391b99d47641dca39f1371b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01741375e41e76ac10f59bed412ccc5602aa7973391b99d47641dca39f1371b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:04 compute-0 podman[256565]: 2026-01-23 10:17:04.739041419 +0000 UTC m=+0.161620263 container init d74ac977f0ae46deab8912648e353417d4994ee9e9d0bd5a9773cf55ba797988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_zhukovsky, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:17:04 compute-0 podman[256565]: 2026-01-23 10:17:04.748331252 +0000 UTC m=+0.170910066 container start d74ac977f0ae46deab8912648e353417d4994ee9e9d0bd5a9773cf55ba797988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_zhukovsky, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 10:17:04 compute-0 podman[256565]: 2026-01-23 10:17:04.752447049 +0000 UTC m=+0.175025893 container attach d74ac977f0ae46deab8912648e353417d4994ee9e9d0bd5a9773cf55ba797988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_zhukovsky, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:17:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:17:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:17:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:05 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:05 compute-0 stupefied_zhukovsky[256581]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:17:05 compute-0 stupefied_zhukovsky[256581]: --> All data devices are unavailable
Jan 23 10:17:05 compute-0 systemd[1]: libpod-d74ac977f0ae46deab8912648e353417d4994ee9e9d0bd5a9773cf55ba797988.scope: Deactivated successfully.
Jan 23 10:17:05 compute-0 podman[256565]: 2026-01-23 10:17:05.143291895 +0000 UTC m=+0.565870749 container died d74ac977f0ae46deab8912648e353417d4994ee9e9d0bd5a9773cf55ba797988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 23 10:17:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-01741375e41e76ac10f59bed412ccc5602aa7973391b99d47641dca39f1371b5-merged.mount: Deactivated successfully.
Jan 23 10:17:05 compute-0 podman[256565]: 2026-01-23 10:17:05.193597512 +0000 UTC m=+0.616176336 container remove d74ac977f0ae46deab8912648e353417d4994ee9e9d0bd5a9773cf55ba797988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_zhukovsky, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:17:05 compute-0 systemd[1]: libpod-conmon-d74ac977f0ae46deab8912648e353417d4994ee9e9d0bd5a9773cf55ba797988.scope: Deactivated successfully.
Jan 23 10:17:05 compute-0 nova_compute[249229]: 2026-01-23 10:17:05.224 249233 DEBUG nova.network.neutron [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Updating instance_info_cache with network_info: [{"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:17:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:05 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:05 compute-0 sudo[256459]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:05 compute-0 sudo[256612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:17:05 compute-0 sudo[256612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:17:05 compute-0 sudo[256612]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:05 compute-0 sudo[256637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:17:05 compute-0 sudo[256637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:17:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:05 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1679498423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:05 compute-0 ceph-mon[74335]: pgmap v714: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:17:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:17:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/703034000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:05.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:05 compute-0 nova_compute[249229]: 2026-01-23 10:17:05.717 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Releasing lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:17:05 compute-0 nova_compute[249229]: 2026-01-23 10:17:05.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 10:17:05 compute-0 nova_compute[249229]: 2026-01-23 10:17:05.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:17:05 compute-0 nova_compute[249229]: 2026-01-23 10:17:05.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:17:05 compute-0 nova_compute[249229]: 2026-01-23 10:17:05.719 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:17:05 compute-0 nova_compute[249229]: 2026-01-23 10:17:05.719 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:17:05 compute-0 nova_compute[249229]: 2026-01-23 10:17:05.719 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:17:05 compute-0 nova_compute[249229]: 2026-01-23 10:17:05.719 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:17:05 compute-0 podman[256704]: 2026-01-23 10:17:05.755765135 +0000 UTC m=+0.040160520 container create 76beb74a7a52bad436e49e5f1d849b08fd66dd56038cd8dceccd6b8c63a0f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 10:17:05 compute-0 systemd[1]: Started libpod-conmon-76beb74a7a52bad436e49e5f1d849b08fd66dd56038cd8dceccd6b8c63a0f36d.scope.
Jan 23 10:17:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:17:05 compute-0 podman[256704]: 2026-01-23 10:17:05.738604368 +0000 UTC m=+0.022999793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:17:05 compute-0 podman[256704]: 2026-01-23 10:17:05.833694595 +0000 UTC m=+0.118090010 container init 76beb74a7a52bad436e49e5f1d849b08fd66dd56038cd8dceccd6b8c63a0f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pike, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:17:05 compute-0 podman[256704]: 2026-01-23 10:17:05.840121197 +0000 UTC m=+0.124516592 container start 76beb74a7a52bad436e49e5f1d849b08fd66dd56038cd8dceccd6b8c63a0f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:17:05 compute-0 podman[256704]: 2026-01-23 10:17:05.843514244 +0000 UTC m=+0.127909709 container attach 76beb74a7a52bad436e49e5f1d849b08fd66dd56038cd8dceccd6b8c63a0f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 23 10:17:05 compute-0 interesting_pike[256720]: 167 167
Jan 23 10:17:05 compute-0 systemd[1]: libpod-76beb74a7a52bad436e49e5f1d849b08fd66dd56038cd8dceccd6b8c63a0f36d.scope: Deactivated successfully.
Jan 23 10:17:05 compute-0 conmon[256720]: conmon 76beb74a7a52bad436e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76beb74a7a52bad436e49e5f1d849b08fd66dd56038cd8dceccd6b8c63a0f36d.scope/container/memory.events
Jan 23 10:17:05 compute-0 podman[256704]: 2026-01-23 10:17:05.847092545 +0000 UTC m=+0.131487950 container died 76beb74a7a52bad436e49e5f1d849b08fd66dd56038cd8dceccd6b8c63a0f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 10:17:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dd51808ee3c0f4444e46712680188af9507572f7edaed724caa42f49a6f80c8-merged.mount: Deactivated successfully.
Jan 23 10:17:05 compute-0 podman[256704]: 2026-01-23 10:17:05.884736333 +0000 UTC m=+0.169131728 container remove 76beb74a7a52bad436e49e5f1d849b08fd66dd56038cd8dceccd6b8c63a0f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:17:05 compute-0 systemd[1]: libpod-conmon-76beb74a7a52bad436e49e5f1d849b08fd66dd56038cd8dceccd6b8c63a0f36d.scope: Deactivated successfully.
Jan 23 10:17:06 compute-0 nova_compute[249229]: 2026-01-23 10:17:06.013 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:17:06 compute-0 podman[256744]: 2026-01-23 10:17:06.102148619 +0000 UTC m=+0.044927525 container create 8dd40b98b5328c447898f76be729b8c055b177cdc3fe1521bcbdf5e84d8e4bf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:17:06 compute-0 systemd[1]: Started libpod-conmon-8dd40b98b5328c447898f76be729b8c055b177cdc3fe1521bcbdf5e84d8e4bf5.scope.
Jan 23 10:17:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:17:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd68dbeebb924e48f2a51dcf49c239ef9546d0ed95524ec281c45f49b91d2c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd68dbeebb924e48f2a51dcf49c239ef9546d0ed95524ec281c45f49b91d2c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd68dbeebb924e48f2a51dcf49c239ef9546d0ed95524ec281c45f49b91d2c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd68dbeebb924e48f2a51dcf49c239ef9546d0ed95524ec281c45f49b91d2c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:06 compute-0 podman[256744]: 2026-01-23 10:17:06.081331978 +0000 UTC m=+0.024110914 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:17:06 compute-0 nova_compute[249229]: 2026-01-23 10:17:06.178 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:06 compute-0 podman[256744]: 2026-01-23 10:17:06.180642045 +0000 UTC m=+0.123420981 container init 8dd40b98b5328c447898f76be729b8c055b177cdc3fe1521bcbdf5e84d8e4bf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 10:17:06 compute-0 podman[256744]: 2026-01-23 10:17:06.190040811 +0000 UTC m=+0.132819727 container start 8dd40b98b5328c447898f76be729b8c055b177cdc3fe1521bcbdf5e84d8e4bf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mendeleev, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:17:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:06 compute-0 podman[256744]: 2026-01-23 10:17:06.194334113 +0000 UTC m=+0.137113019 container attach 8dd40b98b5328c447898f76be729b8c055b177cdc3fe1521bcbdf5e84d8e4bf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 23 10:17:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:17:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:06.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
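The radosgw "beast:" lines above (and the many identical ones that follow every couple of seconds) are anonymous HEAD / probes from 192.168.122.100 and 192.168.122.102 answered with 200, most likely load-balancer health checks against the RGW frontend. A minimal parsing sketch, assuming only the field layout visible in these lines, to pull out client IP, request, status and latency:

```python
# Minimal sketch: extract client IP, request, HTTP status and latency from
# radosgw "beast:" access lines with the layout shown above.
import re

BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.* latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Return the matched fields as a dict, or None for non-matching lines."""
    m = BEAST_RE.search(line)
    return m.groupdict() if m else None

sample = ('Jan 23 10:17:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: '
          '192.168.122.102 - anonymous [23/Jan/2026:10:17:06.362 +0000] '
          '"HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s')
print(parse_beast(sample))
# -> ip=192.168.122.102, user=anonymous, request=HEAD / HTTP/1.0, status=200, latency=0.001000029
```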
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]: {
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:     "1": [
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:         {
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             "devices": [
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "/dev/loop3"
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             ],
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             "lv_name": "ceph_lv0",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             "lv_size": "21470642176",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             "name": "ceph_lv0",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             "tags": {
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.cluster_name": "ceph",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.crush_device_class": "",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.encrypted": "0",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.osd_id": "1",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.type": "block",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.vdo": "0",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:                 "ceph.with_tpm": "0"
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             },
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             "type": "block",
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:             "vg_name": "ceph_vg0"
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:         }
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]:     ]
Jan 23 10:17:06 compute-0 crazy_mendeleev[256761]: }
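The JSON block printed by the crazy_mendeleev container above is keyed by OSD id and carries the ceph.* LVM tags for each logical volume, which matches the shape of a ceph-volume LVM listing: osd.1 on /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3. A minimal summarizing sketch, assuming the output has been captured to a file named osd_inventory.json (a hypothetical filename):

```python
# Minimal sketch: summarize the per-OSD JSON printed above. Top-level keys are
# OSD ids; each entry describes one logical volume and its ceph.* LVM tags.
import json

with open("osd_inventory.json") as fh:   # hypothetical capture of the output above
    inventory = json.load(fh)

for osd_id, volumes in inventory.items():
    for vol in volumes:
        tags = vol.get("tags", {})
        print(
            f"osd.{osd_id}: lv={vol['lv_path']} "
            f"size={int(vol['lv_size']) / 2**30:.1f} GiB "
            f"devices={','.join(vol['devices'])} "
            f"osd_fsid={tags.get('ceph.osd_fsid', '?')} "
            f"encrypted={tags.get('ceph.encrypted', '?')}"
        )
# For the output above this prints roughly:
# osd.1: lv=/dev/ceph_vg0/ceph_lv0 size=20.0 GiB devices=/dev/loop3 osd_fsid=e272688e-... encrypted=0
```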
Jan 23 10:17:06 compute-0 systemd[1]: libpod-8dd40b98b5328c447898f76be729b8c055b177cdc3fe1521bcbdf5e84d8e4bf5.scope: Deactivated successfully.
Jan 23 10:17:06 compute-0 podman[256744]: 2026-01-23 10:17:06.503746618 +0000 UTC m=+0.446525554 container died 8dd40b98b5328c447898f76be729b8c055b177cdc3fe1521bcbdf5e84d8e4bf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 10:17:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v715: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:17:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbd68dbeebb924e48f2a51dcf49c239ef9546d0ed95524ec281c45f49b91d2c3-merged.mount: Deactivated successfully.
Jan 23 10:17:06 compute-0 podman[256744]: 2026-01-23 10:17:06.608301958 +0000 UTC m=+0.551080864 container remove 8dd40b98b5328c447898f76be729b8c055b177cdc3fe1521bcbdf5e84d8e4bf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mendeleev, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:17:06 compute-0 systemd[1]: libpod-conmon-8dd40b98b5328c447898f76be729b8c055b177cdc3fe1521bcbdf5e84d8e4bf5.scope: Deactivated successfully.
Jan 23 10:17:06 compute-0 sudo[256637]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:06 compute-0 sudo[256783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:17:06 compute-0 sudo[256783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:17:06 compute-0 sudo[256783]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:06 compute-0 sudo[256808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:17:06 compute-0 sudo[256808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
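The sudo COMMAND line above records the exact cephadm wrapper invocation used to run `ceph-volume ... raw list --format json` inside a ceph container. A hedged sketch that replays it from Python, with the path, fsid, timeout and image digest copied verbatim from that line (the matching container output further down appears to be `{}`):

```python
# Hedged sketch: replay the cephadm ceph-volume call logged above and parse its
# JSON output. All arguments are copied from the sudo COMMAND line; only the
# subprocess wrapper is added here.
import json
import subprocess

FSID = "f3005f84-239a-55b6-a948-8f1fb592b920"
CEPHADM = (f"/var/lib/ceph/{FSID}/"
           "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

cmd = [
    "sudo", "/bin/python3", CEPHADM,
    "--image", IMAGE,
    "--timeout", "895",
    "ceph-volume", "--fsid", FSID, "--",
    "raw", "list", "--format", "json",
]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
print(json.loads(out))   # the container output further down suggests this returns {}
```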
Jan 23 10:17:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:07 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:07.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
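The Alertmanager dispatcher above repeatedly times out POSTing to the ceph-dashboard webhook receivers on compute-1 and compute-2 (the same error recurs below). A rough reproduction sketch, assuming only the receiver URLs taken from the error text and using an arbitrary placeholder payload and timeout:

```python
# Hedged sketch: probe the webhook receivers Alertmanager is timing out on.
# URLs come from the error message above; the payload is a placeholder, not
# the real Alertmanager notification format.
import requests

receivers = [
    "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
    "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
]
for url in receivers:
    try:
        r = requests.post(url, json={"alerts": []}, timeout=5)
        print(url, "->", r.status_code)
    except requests.RequestException as exc:
        print(url, "->", exc.__class__.__name__, exc)
```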
Jan 23 10:17:07 compute-0 nova_compute[249229]: 2026-01-23 10:17:07.101 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:07 compute-0 podman[256873]: 2026-01-23 10:17:07.164422185 +0000 UTC m=+0.053755115 container create 2e2d5007544c4f0d28290c8264817be7a343fd7828a244b3e220a1b9ab7bc766 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:17:07 compute-0 systemd[1]: Started libpod-conmon-2e2d5007544c4f0d28290c8264817be7a343fd7828a244b3e220a1b9ab7bc766.scope.
Jan 23 10:17:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:17:07 compute-0 podman[256873]: 2026-01-23 10:17:07.134696622 +0000 UTC m=+0.024029572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:17:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:07 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:07 compute-0 podman[256873]: 2026-01-23 10:17:07.259415609 +0000 UTC m=+0.148748569 container init 2e2d5007544c4f0d28290c8264817be7a343fd7828a244b3e220a1b9ab7bc766 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:17:07 compute-0 podman[256873]: 2026-01-23 10:17:07.266327975 +0000 UTC m=+0.155660905 container start 2e2d5007544c4f0d28290c8264817be7a343fd7828a244b3e220a1b9ab7bc766 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 23 10:17:07 compute-0 agitated_volhard[256889]: 167 167
Jan 23 10:17:07 compute-0 systemd[1]: libpod-2e2d5007544c4f0d28290c8264817be7a343fd7828a244b3e220a1b9ab7bc766.scope: Deactivated successfully.
Jan 23 10:17:07 compute-0 podman[256873]: 2026-01-23 10:17:07.309383256 +0000 UTC m=+0.198716186 container attach 2e2d5007544c4f0d28290c8264817be7a343fd7828a244b3e220a1b9ab7bc766 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:17:07 compute-0 podman[256873]: 2026-01-23 10:17:07.309828179 +0000 UTC m=+0.199161109 container died 2e2d5007544c4f0d28290c8264817be7a343fd7828a244b3e220a1b9ab7bc766 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:17:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:07 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-942d55cd492fb8ef76b7d15deb9e3ee162412cb10b575191936e69b51c89e7a8-merged.mount: Deactivated successfully.
Jan 23 10:17:07 compute-0 podman[256873]: 2026-01-23 10:17:07.446229227 +0000 UTC m=+0.335562157 container remove 2e2d5007544c4f0d28290c8264817be7a343fd7828a244b3e220a1b9ab7bc766 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 10:17:07 compute-0 systemd[1]: libpod-conmon-2e2d5007544c4f0d28290c8264817be7a343fd7828a244b3e220a1b9ab7bc766.scope: Deactivated successfully.
Jan 23 10:17:07 compute-0 ceph-mon[74335]: pgmap v715: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:17:07 compute-0 podman[256916]: 2026-01-23 10:17:07.61062774 +0000 UTC m=+0.021858921 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:17:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:07.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:07 compute-0 podman[256916]: 2026-01-23 10:17:07.716984496 +0000 UTC m=+0.128215657 container create 9cd15b70e7178e370a285f5bd7a516a068f4c8a6b6c337dfdeab418e891b37be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_babbage, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Jan 23 10:17:07 compute-0 systemd[1]: Started libpod-conmon-9cd15b70e7178e370a285f5bd7a516a068f4c8a6b6c337dfdeab418e891b37be.scope.
Jan 23 10:17:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365bcd67d1c8892ddd591d8f4f570912aa6159fd3ea9b91071f8c7d667648688/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365bcd67d1c8892ddd591d8f4f570912aa6159fd3ea9b91071f8c7d667648688/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365bcd67d1c8892ddd591d8f4f570912aa6159fd3ea9b91071f8c7d667648688/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365bcd67d1c8892ddd591d8f4f570912aa6159fd3ea9b91071f8c7d667648688/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:17:07 compute-0 podman[256916]: 2026-01-23 10:17:07.910996427 +0000 UTC m=+0.322227588 container init 9cd15b70e7178e370a285f5bd7a516a068f4c8a6b6c337dfdeab418e891b37be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Jan 23 10:17:07 compute-0 podman[256916]: 2026-01-23 10:17:07.919800087 +0000 UTC m=+0.331031238 container start 9cd15b70e7178e370a285f5bd7a516a068f4c8a6b6c337dfdeab418e891b37be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_babbage, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:17:07 compute-0 podman[256916]: 2026-01-23 10:17:07.958242527 +0000 UTC m=+0.369473688 container attach 9cd15b70e7178e370a285f5bd7a516a068f4c8a6b6c337dfdeab418e891b37be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 23 10:17:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:08.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v716: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:17:08 compute-0 lvm[257009]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:17:08 compute-0 lvm[257009]: VG ceph_vg0 finished
Jan 23 10:17:08 compute-0 intelligent_babbage[256934]: {}
Jan 23 10:17:08 compute-0 systemd[1]: libpod-9cd15b70e7178e370a285f5bd7a516a068f4c8a6b6c337dfdeab418e891b37be.scope: Deactivated successfully.
Jan 23 10:17:08 compute-0 systemd[1]: libpod-9cd15b70e7178e370a285f5bd7a516a068f4c8a6b6c337dfdeab418e891b37be.scope: Consumed 1.138s CPU time.
Jan 23 10:17:08 compute-0 podman[257013]: 2026-01-23 10:17:08.733401731 +0000 UTC m=+0.022606342 container died 9cd15b70e7178e370a285f5bd7a516a068f4c8a6b6c337dfdeab418e891b37be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_babbage, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:17:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-365bcd67d1c8892ddd591d8f4f570912aa6159fd3ea9b91071f8c7d667648688-merged.mount: Deactivated successfully.
Jan 23 10:17:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1232319574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:08 compute-0 podman[257013]: 2026-01-23 10:17:08.972496722 +0000 UTC m=+0.261701313 container remove 9cd15b70e7178e370a285f5bd7a516a068f4c8a6b6c337dfdeab418e891b37be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:17:08 compute-0 systemd[1]: libpod-conmon-9cd15b70e7178e370a285f5bd7a516a068f4c8a6b6c337dfdeab418e891b37be.scope: Deactivated successfully.
Jan 23 10:17:09 compute-0 sudo[256808]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:17:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:09 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:17:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:17:09 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
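The mon_command lines above show the cephadm mgr module persisting its per-host caches as monitor config-keys (mgr/cephadm/host.compute-0 and mgr/cephadm/host.compute-0.devices.0). A small sketch to peek at one of those stored values with `ceph config-key get`:

```python
# Hedged sketch: read back the device cache the cephadm mgr module just stored,
# using one of the config-key names visible in the mon_command lines above.
import subprocess

key = "mgr/cephadm/host.compute-0.devices.0"
raw = subprocess.run(
    ["ceph", "config-key", "get", key],
    check=True, capture_output=True, text=True,
).stdout
print(raw[:200])   # peek at the cached blob (appears to be JSON written by the mgr)
```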
Jan 23 10:17:09 compute-0 sudo[257028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:17:09 compute-0 sudo[257028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:17:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:09 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:09 compute-0 sudo[257028]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:09 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:09.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:09] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Jan 23 10:17:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:09] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Jan 23 10:17:09 compute-0 ceph-mon[74335]: pgmap v716: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:17:09 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:17:10 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:17:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:10.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v717: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:17:10 compute-0 podman[257054]: 2026-01-23 10:17:10.553297594 +0000 UTC m=+0.084696643 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
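The podman health_status line above records a passing check for ovn_controller; per its config_data, the test is the bind-mounted /openstack/healthcheck script. A minimal sketch that re-runs the same check on demand via `podman healthcheck run`:

```python
# Minimal sketch: re-run the health check podman just reported as "healthy"
# for the ovn_controller container named in the log line above.
import subprocess

result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_controller"],
    capture_output=True, text=True,
)
# Exit status 0 means healthy; non-zero would correspond to a failing streak.
print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")
```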
Jan 23 10:17:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:11 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:11 compute-0 ceph-mon[74335]: pgmap v717: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:17:11 compute-0 nova_compute[249229]: 2026-01-23 10:17:11.180 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:11 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:11 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:11.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:12 compute-0 nova_compute[249229]: 2026-01-23 10:17:12.103 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:12.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v718: 353 pgs: 353 active+clean; 132 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 399 KiB/s wr, 21 op/s
Jan 23 10:17:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:13 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec0020a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:13 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:13 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:13.621Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:17:13 compute-0 ceph-mon[74335]: pgmap v718: 353 pgs: 353 active+clean; 132 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 399 KiB/s wr, 21 op/s
Jan 23 10:17:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:13.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:14.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v719: 353 pgs: 353 active+clean; 132 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 396 KiB/s wr, 21 op/s
Jan 23 10:17:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:15 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:15 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec002240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:15 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:15 compute-0 ceph-mon[74335]: pgmap v719: 353 pgs: 353 active+clean; 132 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 396 KiB/s wr, 21 op/s
Jan 23 10:17:16 compute-0 nova_compute[249229]: 2026-01-23 10:17:16.183 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:17:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:16.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:17:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v720: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 23 10:17:17 compute-0 ceph-mon[74335]: pgmap v720: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 23 10:17:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:17 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:17.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:17:17 compute-0 nova_compute[249229]: 2026-01-23 10:17:17.104 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:17 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:17 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec0023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:17:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:17:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v721: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 23 10:17:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:19 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:19 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:19 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:19 compute-0 podman[257090]: 2026-01-23 10:17:19.517196491 +0000 UTC m=+0.049402962 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:17:19 compute-0 ceph-mon[74335]: pgmap v721: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 23 10:17:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:19.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:19] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Jan 23 10:17:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:19] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Jan 23 10:17:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:17:19
Jan 23 10:17:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:17:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:17:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'backups', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', 'cephfs.cephfs.data', 'images']
Jan 23 10:17:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:17:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:17:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:17:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011055880461678714 of space, bias 1.0, pg target 0.3316764138503614 quantized to 32 (current 32)
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
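In the pg_autoscaler lines above, each pool's PG target appears to be its capacity ratio times its bias times a fixed PG budget of about 300, which would correspond to the default 100 PGs per OSD across three OSDs (an inference from these figures, not something the log states). A worked check against two of the logged values:

```python
# Worked check (assumption: total PG budget = 100 PGs/OSD x 3 OSDs = 300,
# inferred from the figures above, not stated in the log).
PG_BUDGET = 300

def pg_target(capacity_ratio, bias):
    return capacity_ratio * bias * PG_BUDGET

# Figures copied from the pg_autoscaler lines above:
print(pg_target(0.0011055880461678714, 1.0))   # ~0.33168 -> logged 0.3316764... for 'vms'
print(pg_target(5.087256625643029e-07, 4.0))   # ~0.00061 -> logged 0.0006104... for 'cephfs.cephfs.meta'
# The raw targets are then quantized to powers of two (32 and 16 in the lines above),
# which also reflect each pool's current/minimum pg_num rather than the tiny raw value.
```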
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:17:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v722: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 23 10:17:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:17:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2560479743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:17:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3266007736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:17:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:21 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec0095a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:21 compute-0 nova_compute[249229]: 2026-01-23 10:17:21.224 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:21 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:21 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:21.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:22 compute-0 nova_compute[249229]: 2026-01-23 10:17:22.106 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:22 compute-0 ceph-mon[74335]: pgmap v722: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 23 10:17:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:17:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:22.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:17:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v723: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 23 10:17:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:23 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:23 compute-0 ceph-mon[74335]: pgmap v723: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 23 10:17:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:23 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec0095a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:23 compute-0 sudo[257115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:17:23 compute-0 sudo[257115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:17:23 compute-0 sudo[257115]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:23 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:23.622Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:17:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:23.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:24.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v724: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Jan 23 10:17:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:25 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:25 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:25 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec00a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:25.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:26 compute-0 ceph-mon[74335]: pgmap v724: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Jan 23 10:17:26 compute-0 nova_compute[249229]: 2026-01-23 10:17:26.228 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:17:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:26.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:17:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v725: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Jan 23 10:17:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:27 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:27.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:17:27 compute-0 nova_compute[249229]: 2026-01-23 10:17:27.109 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:27 compute-0 ceph-mon[74335]: pgmap v725: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Jan 23 10:17:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:27 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:27 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:27.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:28.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:17:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:29 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec00a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:29 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:29 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:29.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:29 compute-0 ceph-mon[74335]: pgmap v726: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:17:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:29] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Jan 23 10:17:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:29] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Jan 23 10:17:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:30.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:17:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:31 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:31 compute-0 nova_compute[249229]: 2026-01-23 10:17:31.230 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:31 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec00a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:31 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:31 compute-0 ceph-mon[74335]: pgmap v727: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:17:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:31.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:32 compute-0 nova_compute[249229]: 2026-01-23 10:17:32.112 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:32 compute-0 nova_compute[249229]: 2026-01-23 10:17:32.115 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:17:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:32.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:17:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v728: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 75 op/s
Jan 23 10:17:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:33 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:33 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc002a50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:33 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec00a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:33.624Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:17:33 compute-0 ceph-mon[74335]: pgmap v728: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 75 op/s
Jan 23 10:17:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:17:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:33.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:17:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:34.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v729: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 75 op/s
Jan 23 10:17:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:17:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:17:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:35 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003dd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:35 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:35 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:17:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:35.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:17:35 compute-0 ceph-mon[74335]: pgmap v729: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 75 op/s
Jan 23 10:17:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:17:36 compute-0 rsyslogd[1003]: imjournal: 6688 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 23 10:17:36 compute-0 nova_compute[249229]: 2026-01-23 10:17:36.234 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:36.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v730: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 75 op/s
Jan 23 10:17:36 compute-0 ceph-mon[74335]: pgmap v730: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 75 op/s
Jan 23 10:17:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:37 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec00a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:37.102Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:17:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:37.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:17:37 compute-0 nova_compute[249229]: 2026-01-23 10:17:37.115 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:37 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003dd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:37 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:38.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v731: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 75 op/s
Jan 23 10:17:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:39 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:39 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec00a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:39 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003dd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:39 compute-0 ceph-mon[74335]: pgmap v731: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 75 op/s
Jan 23 10:17:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:39.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:39] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Jan 23 10:17:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:39] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Jan 23 10:17:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:17:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:40.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:17:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 75 op/s
Jan 23 10:17:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:41 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:41 compute-0 nova_compute[249229]: 2026-01-23 10:17:41.236 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:41 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:41 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:41 compute-0 podman[257163]: 2026-01-23 10:17:41.553596434 +0000 UTC m=+0.088635704 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 10:17:41 compute-0 ceph-mon[74335]: pgmap v732: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 75 op/s
Jan 23 10:17:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:41.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:17:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5405 writes, 24K keys, 5401 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 5405 writes, 5401 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1499 writes, 6423 keys, 1499 commit groups, 1.0 writes per commit group, ingest: 11.15 MB, 0.02 MB/s
                                           Interval WAL: 1499 writes, 1499 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     49.4      0.73              0.32        13    0.056       0      0       0.0       0.0
                                             L6      1/0   12.53 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   4.1     66.6     57.6      2.58              0.44        12    0.215     62K   6298       0.0       0.0
                                            Sum      1/0   12.53 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.1     52.0     55.8      3.31              0.76        25    0.132     62K   6298       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.3     93.0     94.7      0.78              0.21        10    0.078     28K   2620       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     66.6     57.6      2.58              0.44        12    0.215     62K   6298       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     49.6      0.72              0.32        12    0.060       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.035, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.18 GB write, 0.10 MB/s write, 0.17 GB read, 0.10 MB/s read, 3.3 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5569ddb77350#2 capacity: 304.00 MB usage: 11.12 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000136 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(601,10.62 MB,3.49341%) FilterBlock(26,183.98 KB,0.0591027%) IndexBlock(26,328.84 KB,0.105637%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 10:17:42 compute-0 nova_compute[249229]: 2026-01-23 10:17:42.118 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:42.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v733: 353 pgs: 353 active+clean; 197 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 23 10:17:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:43 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:43 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:43 compute-0 sudo[257191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:17:43 compute-0 sudo[257191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:17:43 compute-0 sudo[257191]: pam_unix(sudo:session): session closed for user root
Jan 23 10:17:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:43 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003dd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:43.625Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:17:43 compute-0 ceph-mon[74335]: pgmap v733: 353 pgs: 353 active+clean; 197 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 23 10:17:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:43.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:44.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v734: 353 pgs: 353 active+clean; 197 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 23 10:17:45 compute-0 ceph-mon[74335]: pgmap v734: 353 pgs: 353 active+clean; 197 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 23 10:17:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:45 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:45 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:45 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:17:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:45.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:17:46 compute-0 nova_compute[249229]: 2026-01-23 10:17:46.239 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:46.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 10:17:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:47.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:17:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:47.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:17:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:47.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:17:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:47 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003dd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:47 compute-0 nova_compute[249229]: 2026-01-23 10:17:47.120 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:47 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:47 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:47.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:47 compute-0 ceph-mon[74335]: pgmap v735: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 10:17:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:17:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:48.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:17:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 10:17:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:17:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3688632625' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:17:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:17:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3688632625' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:17:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3688632625' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:17:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3688632625' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:17:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:49 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:49 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003dd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:49 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:49.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:49 compute-0 ceph-mon[74335]: pgmap v736: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 10:17:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:49] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Jan 23 10:17:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:49] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Jan 23 10:17:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:17:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:17:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:17:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:17:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:17:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:17:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:17:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:17:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:50.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:50 compute-0 podman[257223]: 2026-01-23 10:17:50.52432094 +0000 UTC m=+0.051501111 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 10:17:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v737: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 10:17:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:17:50 compute-0 ceph-mon[74335]: pgmap v737: 353 pgs: 353 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 10:17:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:51 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8002f00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:51 compute-0 nova_compute[249229]: 2026-01-23 10:17:51.243 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:51 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:51 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003dd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:51.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:51 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1982629231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:52 compute-0 nova_compute[249229]: 2026-01-23 10:17:52.122 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:17:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:52.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:17:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v738: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 23 10:17:53 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1712693474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:17:53 compute-0 ceph-mon[74335]: pgmap v738: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 23 10:17:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:53 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:53 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8002f00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:53 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc002f00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:53.626Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:17:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:53.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:54.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v739: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 18 KiB/s wr, 33 op/s
Jan 23 10:17:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:55 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003dd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:55 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:55 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:55 compute-0 ceph-mon[74335]: pgmap v739: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 18 KiB/s wr, 33 op/s
Jan 23 10:17:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:55.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:56 compute-0 nova_compute[249229]: 2026-01-23 10:17:56.245 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:17:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:56.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:17:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:17:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 19 KiB/s wr, 33 op/s
Jan 23 10:17:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:17:57.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:17:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:57 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc002f00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:57 compute-0 nova_compute[249229]: 2026-01-23 10:17:57.125 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:57 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003dd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:57 compute-0 ceph-mon[74335]: pgmap v740: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 19 KiB/s wr, 33 op/s
Jan 23 10:17:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:57 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:17:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:57.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:17:57 compute-0 ovn_controller[151634]: 2026-01-23T10:17:57Z|00034|binding|INFO|Releasing lport a5ec1982-fcf7-420c-bc38-1abd9fc4085a from this chassis (sb_readonly=0)
Jan 23 10:17:57 compute-0 nova_compute[249229]: 2026-01-23 10:17:57.797 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:17:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:58.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:17:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v741: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 23 10:17:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:59 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.191 249233 DEBUG nova.compute.manager [req-e30e8c42-e1cb-4312-9d4c-6bc599525eab req-f0affadb-7987-4005-bff7-3b9fea3b4805 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Received event network-changed-7db1962f-3a42-428d-955f-aaac0cf186c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.192 249233 DEBUG nova.compute.manager [req-e30e8c42-e1cb-4312-9d4c-6bc599525eab req-f0affadb-7987-4005-bff7-3b9fea3b4805 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Refreshing instance network info cache due to event network-changed-7db1962f-3a42-428d-955f-aaac0cf186c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.192 249233 DEBUG oslo_concurrency.lockutils [req-e30e8c42-e1cb-4312-9d4c-6bc599525eab req-f0affadb-7987-4005-bff7-3b9fea3b4805 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.192 249233 DEBUG oslo_concurrency.lockutils [req-e30e8c42-e1cb-4312-9d4c-6bc599525eab req-f0affadb-7987-4005-bff7-3b9fea3b4805 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquired lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.192 249233 DEBUG nova.network.neutron [req-e30e8c42-e1cb-4312-9d4c-6bc599525eab req-f0affadb-7987-4005-bff7-3b9fea3b4805 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Refreshing network info cache for port 7db1962f-3a42-428d-955f-aaac0cf186c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.251 249233 DEBUG oslo_concurrency.lockutils [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.251 249233 DEBUG oslo_concurrency.lockutils [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.252 249233 DEBUG oslo_concurrency.lockutils [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.252 249233 DEBUG oslo_concurrency.lockutils [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.252 249233 DEBUG oslo_concurrency.lockutils [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.255 249233 INFO nova.compute.manager [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Terminating instance
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.256 249233 DEBUG nova.compute.manager [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 10:17:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:59 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:59 compute-0 kernel: tap7db1962f-3a (unregistering): left promiscuous mode
Jan 23 10:17:59 compute-0 NetworkManager[48866]: <info>  [1769163479.4036] device (tap7db1962f-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.410 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:59 compute-0 ovn_controller[151634]: 2026-01-23T10:17:59Z|00035|binding|INFO|Releasing lport 7db1962f-3a42-428d-955f-aaac0cf186c5 from this chassis (sb_readonly=0)
Jan 23 10:17:59 compute-0 ovn_controller[151634]: 2026-01-23T10:17:59Z|00036|binding|INFO|Setting lport 7db1962f-3a42-428d-955f-aaac0cf186c5 down in Southbound
Jan 23 10:17:59 compute-0 ovn_controller[151634]: 2026-01-23T10:17:59Z|00037|binding|INFO|Removing iface tap7db1962f-3a ovn-installed in OVS
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.413 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.420 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:89:66 10.100.0.14'], port_security=['fa:16:3e:45:89:66 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f385572a-ade5-4da0-b6d8-d6bb5cdc919e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'neutron:revision_number': '4', 'neutron:security_group_ids': '22a38a63-e659-46d6-a24c-d4af0f15baaf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ae92df3-ded7-43fb-bc56-81665ce8e357, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], logical_port=7db1962f-3a42-428d-955f-aaac0cf186c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.423 161921 INFO neutron.agent.ovn.metadata.agent [-] Port 7db1962f-3a42-428d-955f-aaac0cf186c5 in datapath 4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3 unbound from our chassis
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.424 161921 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.428 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d632bd-cbbf-49fe-aee9-f525337de7ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.429 161921 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3 namespace which is not needed anymore
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.433 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:17:59 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4003dd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:17:59 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 23 10:17:59 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 17.321s CPU time.
Jan 23 10:17:59 compute-0 systemd-machined[216411]: Machine qemu-1-instance-00000001 terminated.
Jan 23 10:17:59 compute-0 neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3[255374]: [NOTICE]   (255378) : haproxy version is 2.8.14-c23fe91
Jan 23 10:17:59 compute-0 neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3[255374]: [NOTICE]   (255378) : path to executable is /usr/sbin/haproxy
Jan 23 10:17:59 compute-0 neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3[255374]: [WARNING]  (255378) : Exiting Master process...
Jan 23 10:17:59 compute-0 neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3[255374]: [ALERT]    (255378) : Current worker (255380) exited with code 143 (Terminated)
Jan 23 10:17:59 compute-0 neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3[255374]: [WARNING]  (255378) : All workers exited. Exiting... (0)
Jan 23 10:17:59 compute-0 systemd[1]: libpod-836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27.scope: Deactivated successfully.
Jan 23 10:17:59 compute-0 podman[257279]: 2026-01-23 10:17:59.57841973 +0000 UTC m=+0.047622870 container died 836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 10:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27-userdata-shm.mount: Deactivated successfully.
Jan 23 10:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d486e17e0f42eb5bb3e615861903d7e136a404c852ac295efd4d0efa0c32c7ae-merged.mount: Deactivated successfully.
Jan 23 10:17:59 compute-0 podman[257279]: 2026-01-23 10:17:59.614546469 +0000 UTC m=+0.083749609 container cleanup 836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 10:17:59 compute-0 ceph-mon[74335]: pgmap v741: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 23 10:17:59 compute-0 systemd[1]: libpod-conmon-836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27.scope: Deactivated successfully.
Jan 23 10:17:59 compute-0 podman[257308]: 2026-01-23 10:17:59.677115995 +0000 UTC m=+0.040039993 container remove 836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.678 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.682 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[ad587f49-85ce-4084-bc88-0c5f3d879767]: (4, ('Fri Jan 23 10:17:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3 (836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27)\n836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27\nFri Jan 23 10:17:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3 (836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27)\n836e15165ae84a36a6e0d3b8e28f667cbc7edea61fce60a0a64da5499b02da27\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.683 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.683 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[95d011f2-acd8-4b06-9f9f-9b4b46b541e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.684 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f4a5a80-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:17:59 compute-0 kernel: tap4f4a5a80-80: left promiscuous mode
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.686 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.693 249233 INFO nova.virt.libvirt.driver [-] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Instance destroyed successfully.
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.693 249233 DEBUG nova.objects.instance [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'resources' on Instance uuid f385572a-ade5-4da0-b6d8-d6bb5cdc919e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.703 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.706 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[eca454ca-ce0e-47bb-9a92-99e6be1a269a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.710 249233 DEBUG nova.virt.libvirt.vif [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-896397354',display_name='tempest-TestNetworkBasicOps-server-896397354',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-896397354',id=1,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBdrrN909tRIL4+NsC0YFSFTi3EuZsB2pesGmAyVsHAWGns8IyxroukgzCqNJ0STgim697i6oxgop6PVFjv6RyikBB+iN2/4f4D0fD1li8fNUXFCCnib2uuGD3w4Sjam9Q==',key_name='tempest-TestNetworkBasicOps-526025578',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:16:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-0bqn5yko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:16:29Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=f385572a-ade5-4da0-b6d8-d6bb5cdc919e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.710 249233 DEBUG nova.network.os_vif_util [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.711 249233 DEBUG nova.network.os_vif_util [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:89:66,bridge_name='br-int',has_traffic_filtering=True,id=7db1962f-3a42-428d-955f-aaac0cf186c5,network=Network(4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7db1962f-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.711 249233 DEBUG os_vif [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:89:66,bridge_name='br-int',has_traffic_filtering=True,id=7db1962f-3a42-428d-955f-aaac0cf186c5,network=Network(4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7db1962f-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.713 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.713 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7db1962f-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.715 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.715 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.716 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.717 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[77d29f21-fe14-428a-9be9-41c229518e1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.718 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.719 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[8446f7af-45f6-4a8f-91e1-748c570d3b3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.722 249233 INFO os_vif [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:89:66,bridge_name='br-int',has_traffic_filtering=True,id=7db1962f-3a42-428d-955f-aaac0cf186c5,network=Network(4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7db1962f-3a')
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.735 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2adfb8-5d89-4d48-8a36-d1e903fb7582]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451401, 'reachable_time': 40550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257338, 'error': None, 'target': 'ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.745 249233 DEBUG nova.compute.manager [req-cbf05b97-419a-4a53-b803-52852f6424f0 req-645aa99e-004e-4a41-9c08-df5f34cd7b01 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Received event network-vif-unplugged-7db1962f-3a42-428d-955f-aaac0cf186c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.746 249233 DEBUG oslo_concurrency.lockutils [req-cbf05b97-419a-4a53-b803-52852f6424f0 req-645aa99e-004e-4a41-9c08-df5f34cd7b01 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.746 249233 DEBUG oslo_concurrency.lockutils [req-cbf05b97-419a-4a53-b803-52852f6424f0 req-645aa99e-004e-4a41-9c08-df5f34cd7b01 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.747 249233 DEBUG oslo_concurrency.lockutils [req-cbf05b97-419a-4a53-b803-52852f6424f0 req-645aa99e-004e-4a41-9c08-df5f34cd7b01 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.747 249233 DEBUG nova.compute.manager [req-cbf05b97-419a-4a53-b803-52852f6424f0 req-645aa99e-004e-4a41-9c08-df5f34cd7b01 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] No waiting events found dispatching network-vif-unplugged-7db1962f-3a42-428d-955f-aaac0cf186c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.747 249233 DEBUG nova.compute.manager [req-cbf05b97-419a-4a53-b803-52852f6424f0 req-645aa99e-004e-4a41-9c08-df5f34cd7b01 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Received event network-vif-unplugged-7db1962f-3a42-428d-955f-aaac0cf186c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 23 10:17:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d4f4a5a80\x2d880b\x2d45e6\x2dab84\x2de7c26dc2b3e3.mount: Deactivated successfully.
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.752 162436 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.753 162436 DEBUG oslo.privsep.daemon [-] privsep: reply[63316276-9220-4ffa-b855-c2bc63cc9512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:17:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:17:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:17:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.771 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.772 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:17:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:17:59.772 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.776 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.777 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.777 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.777 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:17:59 compute-0 nova_compute[249229]: 2026-01-23 10:17:59.778 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:17:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:59] "GET /metrics HTTP/1.1" 200 48544 "" "Prometheus/2.51.0"
Jan 23 10:17:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:17:59] "GET /metrics HTTP/1.1" 200 48544 "" "Prometheus/2.51.0"
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.184 249233 INFO nova.virt.libvirt.driver [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Deleting instance files /var/lib/nova/instances/f385572a-ade5-4da0-b6d8-d6bb5cdc919e_del
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.186 249233 INFO nova.virt.libvirt.driver [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Deletion of /var/lib/nova/instances/f385572a-ade5-4da0-b6d8-d6bb5cdc919e_del complete
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.238 249233 DEBUG nova.virt.libvirt.host [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.240 249233 INFO nova.virt.libvirt.host [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] UEFI support detected
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.243 249233 INFO nova.compute.manager [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Took 0.99 seconds to destroy the instance on the hypervisor.
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.244 249233 DEBUG oslo.service.loopingcall [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.245 249233 DEBUG nova.compute.manager [-] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.245 249233 DEBUG nova.network.neutron [-] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 10:18:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:18:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2675935538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.270 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:18:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:00.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.492 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.494 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4607MB free_disk=59.94265365600586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.495 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.495 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:18:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.590 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Instance f385572a-ade5-4da0-b6d8-d6bb5cdc919e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.590 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.591 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:18:00 compute-0 nova_compute[249229]: 2026-01-23 10:18:00.643 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:18:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2675935538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.016 249233 DEBUG nova.network.neutron [req-e30e8c42-e1cb-4312-9d4c-6bc599525eab req-f0affadb-7987-4005-bff7-3b9fea3b4805 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Updated VIF entry in instance network info cache for port 7db1962f-3a42-428d-955f-aaac0cf186c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.017 249233 DEBUG nova.network.neutron [req-e30e8c42-e1cb-4312-9d4c-6bc599525eab req-f0affadb-7987-4005-bff7-3b9fea3b4805 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Updating instance_info_cache with network_info: [{"id": "7db1962f-3a42-428d-955f-aaac0cf186c5", "address": "fa:16:3e:45:89:66", "network": {"id": "4f4a5a80-880b-45e6-ab84-e7c26dc2b3e3", "bridge": "br-int", "label": "tempest-network-smoke--1837896573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7db1962f-3a", "ovs_interfaceid": "7db1962f-3a42-428d-955f-aaac0cf186c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.037 249233 DEBUG nova.network.neutron [-] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.039 249233 DEBUG oslo_concurrency.lockutils [req-e30e8c42-e1cb-4312-9d4c-6bc599525eab req-f0affadb-7987-4005-bff7-3b9fea3b4805 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Releasing lock "refresh_cache-f385572a-ade5-4da0-b6d8-d6bb5cdc919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.050 249233 INFO nova.compute.manager [-] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Took 0.81 seconds to deallocate network for instance.
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.093 249233 DEBUG oslo_concurrency.lockutils [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:18:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:18:01 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2538524120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:18:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:18:01 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.133 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.137 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.160 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.206 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.206 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.207 249233 DEBUG oslo_concurrency.lockutils [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:18:01 compute-0 kernel: ganesha.nfsd[255288]: segfault at 50 ip 00007fb4749b532e sp 00007fb3f17f9210 error 4 in libntirpc.so.5.8[7fb47499a000+2c000] likely on CPU 2 (core 0, socket 2)
Jan 23 10:18:01 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 23 10:18:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[254781]: 23/01/2026 10:18:01 : epoch 69734a73 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003c10 fd 42 proxy ignored for local
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.261 249233 DEBUG oslo_concurrency.processutils [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:18:01 compute-0 systemd[1]: Started Process Core Dump (PID 257406/UID 0).
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.321 249233 DEBUG nova.compute.manager [req-60900a73-54e0-4d7c-bef7-f55f07d1f8f5 req-f1945b8f-79ec-4262-850a-9a2bf770999e 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Received event network-vif-deleted-7db1962f-3a42-428d-955f-aaac0cf186c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:18:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:01 compute-0 ceph-mon[74335]: pgmap v742: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 23 10:18:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2538524120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:18:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:18:01 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1665070625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.740 249233 DEBUG oslo_concurrency.processutils [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.745 249233 DEBUG nova.compute.provider_tree [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:18:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:01.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.765 249233 DEBUG nova.scheduler.client.report [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.793 249233 DEBUG oslo_concurrency.lockutils [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.826 249233 INFO nova.scheduler.client.report [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Deleted allocations for instance f385572a-ade5-4da0-b6d8-d6bb5cdc919e
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.843 249233 DEBUG nova.compute.manager [req-5ede4ba7-6474-47c2-a9f9-7bfa4c4f93ce req-838a700c-280f-4f3e-8740-0b9ecc95e249 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Received event network-vif-plugged-7db1962f-3a42-428d-955f-aaac0cf186c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.843 249233 DEBUG oslo_concurrency.lockutils [req-5ede4ba7-6474-47c2-a9f9-7bfa4c4f93ce req-838a700c-280f-4f3e-8740-0b9ecc95e249 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.844 249233 DEBUG oslo_concurrency.lockutils [req-5ede4ba7-6474-47c2-a9f9-7bfa4c4f93ce req-838a700c-280f-4f3e-8740-0b9ecc95e249 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.844 249233 DEBUG oslo_concurrency.lockutils [req-5ede4ba7-6474-47c2-a9f9-7bfa4c4f93ce req-838a700c-280f-4f3e-8740-0b9ecc95e249 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.844 249233 DEBUG nova.compute.manager [req-5ede4ba7-6474-47c2-a9f9-7bfa4c4f93ce req-838a700c-280f-4f3e-8740-0b9ecc95e249 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] No waiting events found dispatching network-vif-plugged-7db1962f-3a42-428d-955f-aaac0cf186c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.844 249233 WARNING nova.compute.manager [req-5ede4ba7-6474-47c2-a9f9-7bfa4c4f93ce req-838a700c-280f-4f3e-8740-0b9ecc95e249 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Received unexpected event network-vif-plugged-7db1962f-3a42-428d-955f-aaac0cf186c5 for instance with vm_state deleted and task_state None.
Jan 23 10:18:01 compute-0 nova_compute[249229]: 2026-01-23 10:18:01.930 249233 DEBUG oslo_concurrency.lockutils [None req-2c5932d7-4b13-421f-82f0-b5df9b1db183 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "f385572a-ade5-4da0-b6d8-d6bb5cdc919e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:18:02 compute-0 nova_compute[249229]: 2026-01-23 10:18:02.127 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:02 compute-0 nova_compute[249229]: 2026-01-23 10:18:02.208 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:18:02 compute-0 nova_compute[249229]: 2026-01-23 10:18:02.208 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:18:02 compute-0 nova_compute[249229]: 2026-01-23 10:18:02.208 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:18:02 compute-0 nova_compute[249229]: 2026-01-23 10:18:02.227 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:18:02 compute-0 nova_compute[249229]: 2026-01-23 10:18:02.227 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:18:02 compute-0 nova_compute[249229]: 2026-01-23 10:18:02.228 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:18:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:02.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Jan 23 10:18:02 compute-0 nova_compute[249229]: 2026-01-23 10:18:02.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:18:02 compute-0 nova_compute[249229]: 2026-01-23 10:18:02.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:18:02 compute-0 nova_compute[249229]: 2026-01-23 10:18:02.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:18:02 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1665070625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:18:02 compute-0 systemd-coredump[257408]: Process 254785 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 46:
                                                    #0  0x00007fb4749b532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 23 10:18:02 compute-0 systemd[1]: systemd-coredump@8-257406-0.service: Deactivated successfully.
Jan 23 10:18:02 compute-0 systemd[1]: systemd-coredump@8-257406-0.service: Consumed 1.477s CPU time.
Jan 23 10:18:03 compute-0 podman[257436]: 2026-01-23 10:18:03.0224979 +0000 UTC m=+0.024735264 container died 0cf8d60bfb72762bf9544d1dbb65a80fa5e606e4b7e050f91cd95cf3caeee354 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ae16c0efd8c7a12acb5f3d6cd51f2397e2ef12abc34ee31c35e3751c2d8679a-merged.mount: Deactivated successfully.
Jan 23 10:18:03 compute-0 podman[257436]: 2026-01-23 10:18:03.056247475 +0000 UTC m=+0.058484809 container remove 0cf8d60bfb72762bf9544d1dbb65a80fa5e606e4b7e050f91cd95cf3caeee354 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:18:03 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
Jan 23 10:18:03 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 10:18:03 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.803s CPU time.
Jan 23 10:18:03 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:18:03.420 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:18:03 compute-0 nova_compute[249229]: 2026-01-23 10:18:03.419 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:03 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:18:03.422 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:18:03 compute-0 sudo[257481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:18:03 compute-0 sudo[257481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:03 compute-0 sudo[257481]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:03.628Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:18:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:03 compute-0 ceph-mon[74335]: pgmap v743: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Jan 23 10:18:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2045104684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:18:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:04.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 23 10:18:04 compute-0 nova_compute[249229]: 2026-01-23 10:18:04.715 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:04 compute-0 nova_compute[249229]: 2026-01-23 10:18:04.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:18:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3410514855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:18:04 compute-0 ceph-mon[74335]: pgmap v744: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 23 10:18:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:18:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:18:05 compute-0 nova_compute[249229]: 2026-01-23 10:18:05.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:18:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:18:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/4166887387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:18:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:06.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 23 10:18:06 compute-0 nova_compute[249229]: 2026-01-23 10:18:06.762 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:06 compute-0 nova_compute[249229]: 2026-01-23 10:18:06.846 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:06 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/4138145315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:18:06 compute-0 ceph-mon[74335]: pgmap v745: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 23 10:18:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:07.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:18:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:07.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:18:07 compute-0 nova_compute[249229]: 2026-01-23 10:18:07.128 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101807 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:18:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:18:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:07.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:18:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:08.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:18:09 compute-0 sudo[257514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:18:09 compute-0 sudo[257514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:09 compute-0 sudo[257514]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:09 compute-0 sudo[257539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:18:09 compute-0 sudo[257539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:09 compute-0 ceph-mon[74335]: pgmap v746: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:18:09 compute-0 nova_compute[249229]: 2026-01-23 10:18:09.716 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:09.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:09] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Jan 23 10:18:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:09] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Jan 23 10:18:10 compute-0 sudo[257539]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:18:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:18:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:18:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:18:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:18:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:18:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:18:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:18:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:18:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:18:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:18:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:18:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:18:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:18:10 compute-0 sudo[257595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:18:10 compute-0 sudo[257595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:10 compute-0 sudo[257595]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:10 compute-0 sudo[257620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:18:10 compute-0 sudo[257620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:10 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:18:10.423 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:18:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:18:10 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:18:10 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:18:10 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:18:10 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:18:10 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:18:10 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:18:10 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:18:10 compute-0 podman[257688]: 2026-01-23 10:18:10.698600994 +0000 UTC m=+0.045399711 container create a721243335346527f920696d2dbed7ead813cd095e728c6bd860c28f7c7996c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_zhukovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:18:10 compute-0 systemd[1]: Started libpod-conmon-a721243335346527f920696d2dbed7ead813cd095e728c6bd860c28f7c7996c3.scope.
Jan 23 10:18:10 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:18:10 compute-0 podman[257688]: 2026-01-23 10:18:10.678938339 +0000 UTC m=+0.025737086 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:18:10 compute-0 podman[257688]: 2026-01-23 10:18:10.781580157 +0000 UTC m=+0.128378894 container init a721243335346527f920696d2dbed7ead813cd095e728c6bd860c28f7c7996c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 10:18:10 compute-0 podman[257688]: 2026-01-23 10:18:10.788496834 +0000 UTC m=+0.135295551 container start a721243335346527f920696d2dbed7ead813cd095e728c6bd860c28f7c7996c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 10:18:10 compute-0 podman[257688]: 2026-01-23 10:18:10.791527488 +0000 UTC m=+0.138326205 container attach a721243335346527f920696d2dbed7ead813cd095e728c6bd860c28f7c7996c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_zhukovsky, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Jan 23 10:18:10 compute-0 pensive_zhukovsky[257705]: 167 167
Jan 23 10:18:10 compute-0 systemd[1]: libpod-a721243335346527f920696d2dbed7ead813cd095e728c6bd860c28f7c7996c3.scope: Deactivated successfully.
Jan 23 10:18:10 compute-0 podman[257688]: 2026-01-23 10:18:10.796011509 +0000 UTC m=+0.142810236 container died a721243335346527f920696d2dbed7ead813cd095e728c6bd860c28f7c7996c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 10:18:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-afd38dfef0ec1c50f2453a41d95d35f7efe5affd19ac4f68b49f80eab75b2cef-merged.mount: Deactivated successfully.
Jan 23 10:18:10 compute-0 podman[257688]: 2026-01-23 10:18:10.838647161 +0000 UTC m=+0.185445878 container remove a721243335346527f920696d2dbed7ead813cd095e728c6bd860c28f7c7996c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:18:10 compute-0 systemd[1]: libpod-conmon-a721243335346527f920696d2dbed7ead813cd095e728c6bd860c28f7c7996c3.scope: Deactivated successfully.
Jan 23 10:18:10 compute-0 podman[257726]: 2026-01-23 10:18:10.982008483 +0000 UTC m=+0.038737652 container create a585b0e17d652113c6c240c1d301802eda286b625ec9505609734a4a0955c1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_neumann, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 10:18:11 compute-0 systemd[1]: Started libpod-conmon-a585b0e17d652113c6c240c1d301802eda286b625ec9505609734a4a0955c1ad.scope.
Jan 23 10:18:11 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde0d8a8c09c43b80c40c4407a15b3ac07353cfd73637109a2d422dad72db0f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde0d8a8c09c43b80c40c4407a15b3ac07353cfd73637109a2d422dad72db0f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde0d8a8c09c43b80c40c4407a15b3ac07353cfd73637109a2d422dad72db0f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde0d8a8c09c43b80c40c4407a15b3ac07353cfd73637109a2d422dad72db0f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde0d8a8c09c43b80c40c4407a15b3ac07353cfd73637109a2d422dad72db0f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:11 compute-0 podman[257726]: 2026-01-23 10:18:11.05901384 +0000 UTC m=+0.115743029 container init a585b0e17d652113c6c240c1d301802eda286b625ec9505609734a4a0955c1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 10:18:11 compute-0 podman[257726]: 2026-01-23 10:18:10.965865478 +0000 UTC m=+0.022594667 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:18:11 compute-0 podman[257726]: 2026-01-23 10:18:11.067394742 +0000 UTC m=+0.124123911 container start a585b0e17d652113c6c240c1d301802eda286b625ec9505609734a4a0955c1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 10:18:11 compute-0 podman[257726]: 2026-01-23 10:18:11.070467698 +0000 UTC m=+0.127196867 container attach a585b0e17d652113c6c240c1d301802eda286b625ec9505609734a4a0955c1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:18:11 compute-0 jovial_neumann[257742]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:18:11 compute-0 jovial_neumann[257742]: --> All data devices are unavailable
Jan 23 10:18:11 compute-0 systemd[1]: libpod-a585b0e17d652113c6c240c1d301802eda286b625ec9505609734a4a0955c1ad.scope: Deactivated successfully.
Jan 23 10:18:11 compute-0 podman[257726]: 2026-01-23 10:18:11.415909897 +0000 UTC m=+0.472639086 container died a585b0e17d652113c6c240c1d301802eda286b625ec9505609734a4a0955c1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_neumann, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 10:18:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:11.637188) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163491637392, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1099, "num_deletes": 251, "total_data_size": 1897018, "memory_usage": 1919536, "flush_reason": "Manual Compaction"}
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 23 10:18:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:11.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163491795125, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1850362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23812, "largest_seqno": 24910, "table_properties": {"data_size": 1845137, "index_size": 2685, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11658, "raw_average_key_size": 19, "raw_value_size": 1834496, "raw_average_value_size": 3135, "num_data_blocks": 120, "num_entries": 585, "num_filter_entries": 585, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163397, "oldest_key_time": 1769163397, "file_creation_time": 1769163491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 157996 microseconds, and 5100 cpu microseconds.
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:11.795186) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1850362 bytes OK
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:11.795210) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:11.804967) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:11.805016) EVENT_LOG_v1 {"time_micros": 1769163491805006, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:11.805039) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1892014, prev total WAL file size 1892985, number of live WAL files 2.
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:11.805860) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1806KB)], [53(12MB)]
Jan 23 10:18:11 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163491805948, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14992981, "oldest_snapshot_seqno": -1}
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5449 keys, 12824233 bytes, temperature: kUnknown
Jan 23 10:18:12 compute-0 ceph-mon[74335]: pgmap v747: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163492050206, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12824233, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12787779, "index_size": 21752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 139897, "raw_average_key_size": 25, "raw_value_size": 12688750, "raw_average_value_size": 2328, "num_data_blocks": 884, "num_entries": 5449, "num_filter_entries": 5449, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769163491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:18:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bde0d8a8c09c43b80c40c4407a15b3ac07353cfd73637109a2d422dad72db0f4-merged.mount: Deactivated successfully.
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:12.050584) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12824233 bytes
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:12.107830) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 61.3 rd, 52.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.5 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(15.0) write-amplify(6.9) OK, records in: 5965, records dropped: 516 output_compression: NoCompression
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:12.107866) EVENT_LOG_v1 {"time_micros": 1769163492107852, "job": 28, "event": "compaction_finished", "compaction_time_micros": 244425, "compaction_time_cpu_micros": 41354, "output_level": 6, "num_output_files": 1, "total_output_size": 12824233, "num_input_records": 5965, "num_output_records": 5449, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:11.805719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:12.108182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:12.108186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:12.108188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:12.108190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:18:12 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:18:12.108191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
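
The ceph-mon RocksDB messages above interleave human-readable flush/compaction progress with EVENT_LOG_v1 entries whose payload is a single JSON object. A minimal sketch for summarizing the compaction_finished events from such a log; the input file name messages.log and the choice of fields are illustrative assumptions, not taken from this journal.

    #!/usr/bin/env python3
    """Sketch: summarize RocksDB EVENT_LOG_v1 compaction events from a journal dump.

    Assumptions (not from the log itself): the journal has been exported to a
    plain-text file named 'messages.log'; only the JSON payload after the
    'EVENT_LOG_v1 ' marker is parsed.
    """
    import json
    import sys

    MARKER = "EVENT_LOG_v1 "

    def iter_events(path):
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                idx = line.find(MARKER)
                if idx == -1:
                    continue
                payload = line[idx + len(MARKER):].strip()
                try:
                    yield json.loads(payload)
                except json.JSONDecodeError:
                    continue  # skip lines whose JSON payload was truncated

    def main(path="messages.log"):
        for ev in iter_events(path):
            if ev.get("event") != "compaction_finished":
                continue
            secs = ev["compaction_time_micros"] / 1e6
            print(f"job {ev['job']}: {ev['num_input_records']} -> "
                  f"{ev['num_output_records']} records, "
                  f"{ev['total_output_size']} bytes to L{ev['output_level']} "
                  f"in {secs:.3f}s")

    if __name__ == "__main__":
        main(*sys.argv[1:])

Run against the entries above, this would report job 28 as 5965 -> 5449 records and 12824233 bytes written to L6 in roughly 0.244 s, matching the compaction_finished line at 10:18:12.
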
Jan 23 10:18:12 compute-0 podman[257726]: 2026-01-23 10:18:12.119426408 +0000 UTC m=+1.176155577 container remove a585b0e17d652113c6c240c1d301802eda286b625ec9505609734a4a0955c1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 10:18:12 compute-0 nova_compute[249229]: 2026-01-23 10:18:12.131 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:12 compute-0 sudo[257620]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:12 compute-0 systemd[1]: libpod-conmon-a585b0e17d652113c6c240c1d301802eda286b625ec9505609734a4a0955c1ad.scope: Deactivated successfully.
Jan 23 10:18:12 compute-0 sudo[257779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:18:12 compute-0 sudo[257779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:12 compute-0 sudo[257779]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:12 compute-0 sudo[257814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:18:12 compute-0 podman[257768]: 2026-01-23 10:18:12.289438883 +0000 UTC m=+0.438002043 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 10:18:12 compute-0 sudo[257814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:12.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
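
radosgw repeats this three-line pattern for every health probe: a "starting new request" marker, a "req done" summary, and a beast access-log line carrying the client IP, user, timestamp, request line, HTTP status, byte count, and latency. A minimal sketch for pulling those fields out of the beast lines; it reads from stdin, and the regular expression only covers the fields visible here, so treat its layout as an assumption.

    #!/usr/bin/env python3
    """Sketch: parse radosgw 'beast' access-log lines like the ones above.

    Assumptions: log text arrives on stdin; the regex matches only the field
    layout visible in this journal (IP, user, timestamp, request, status,
    bytes, latency).
    """
    import re
    import sys

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    for line in sys.stdin:
        m = BEAST.search(line)
        if not m:
            continue
        print(f"{m['ts']} {m['ip']} {m['req']!r} -> {m['status']} "
              f"({m['bytes']} bytes, {m['latency']}s)")

Fed the beast line directly above, it would report an anonymous "HEAD / HTTP/1.0" from 192.168.122.102 answered 200 with 0 bytes and 0.000000000 s latency.
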
Jan 23 10:18:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v748: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:18:12 compute-0 podman[257887]: 2026-01-23 10:18:12.733667818 +0000 UTC m=+0.093272786 container create 6d20eeea690bf279c9cf1bee48836fa409a1ea718656ae7fbd39646b87efc9c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Jan 23 10:18:12 compute-0 podman[257887]: 2026-01-23 10:18:12.665325382 +0000 UTC m=+0.024930370 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:18:12 compute-0 systemd[1]: Started libpod-conmon-6d20eeea690bf279c9cf1bee48836fa409a1ea718656ae7fbd39646b87efc9c2.scope.
Jan 23 10:18:12 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:18:12 compute-0 podman[257887]: 2026-01-23 10:18:12.945546952 +0000 UTC m=+0.305152000 container init 6d20eeea690bf279c9cf1bee48836fa409a1ea718656ae7fbd39646b87efc9c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 23 10:18:12 compute-0 podman[257887]: 2026-01-23 10:18:12.952879671 +0000 UTC m=+0.312484629 container start 6d20eeea690bf279c9cf1bee48836fa409a1ea718656ae7fbd39646b87efc9c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Jan 23 10:18:12 compute-0 podman[257887]: 2026-01-23 10:18:12.959927091 +0000 UTC m=+0.319532099 container attach 6d20eeea690bf279c9cf1bee48836fa409a1ea718656ae7fbd39646b87efc9c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_feynman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 10:18:12 compute-0 elated_feynman[257904]: 167 167
Jan 23 10:18:12 compute-0 systemd[1]: libpod-6d20eeea690bf279c9cf1bee48836fa409a1ea718656ae7fbd39646b87efc9c2.scope: Deactivated successfully.
Jan 23 10:18:12 compute-0 podman[257887]: 2026-01-23 10:18:12.961097198 +0000 UTC m=+0.320702156 container died 6d20eeea690bf279c9cf1bee48836fa409a1ea718656ae7fbd39646b87efc9c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_feynman, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:18:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f0d24420db5322a45fefd7b506d4a40cff11e95f45a67f2e2921907ea9041f8-merged.mount: Deactivated successfully.
Jan 23 10:18:13 compute-0 podman[257887]: 2026-01-23 10:18:13.009090868 +0000 UTC m=+0.368695826 container remove 6d20eeea690bf279c9cf1bee48836fa409a1ea718656ae7fbd39646b87efc9c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_feynman, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Jan 23 10:18:13 compute-0 systemd[1]: libpod-conmon-6d20eeea690bf279c9cf1bee48836fa409a1ea718656ae7fbd39646b87efc9c2.scope: Deactivated successfully.
Jan 23 10:18:13 compute-0 ceph-mon[74335]: pgmap v748: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:18:13 compute-0 podman[257928]: 2026-01-23 10:18:13.171046051 +0000 UTC m=+0.042638854 container create 895dc39f8c3cebd85414b60437956d5407f9590a9f27f045c99d251adf88f175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 10:18:13 compute-0 systemd[1]: Started libpod-conmon-895dc39f8c3cebd85414b60437956d5407f9590a9f27f045c99d251adf88f175.scope.
Jan 23 10:18:13 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 9.
Jan 23 10:18:13 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:18:13 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.803s CPU time.
Jan 23 10:18:13 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 10:18:13 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea72a0f64de8680f641cef0313e5cdcb62df3a141cc38151d524e40a6c1feb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea72a0f64de8680f641cef0313e5cdcb62df3a141cc38151d524e40a6c1feb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea72a0f64de8680f641cef0313e5cdcb62df3a141cc38151d524e40a6c1feb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea72a0f64de8680f641cef0313e5cdcb62df3a141cc38151d524e40a6c1feb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:13 compute-0 podman[257928]: 2026-01-23 10:18:13.234435653 +0000 UTC m=+0.106028486 container init 895dc39f8c3cebd85414b60437956d5407f9590a9f27f045c99d251adf88f175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:18:13 compute-0 podman[257928]: 2026-01-23 10:18:13.243451094 +0000 UTC m=+0.115043897 container start 895dc39f8c3cebd85414b60437956d5407f9590a9f27f045c99d251adf88f175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:18:13 compute-0 podman[257928]: 2026-01-23 10:18:13.152282824 +0000 UTC m=+0.023875647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:18:13 compute-0 podman[257928]: 2026-01-23 10:18:13.246998125 +0000 UTC m=+0.118590958 container attach 895dc39f8c3cebd85414b60437956d5407f9590a9f27f045c99d251adf88f175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:18:13 compute-0 podman[257996]: 2026-01-23 10:18:13.410158046 +0000 UTC m=+0.037267636 container create c87cefca188c1caa50d4668462db3da5576faebe60401a0a328ee7e09b5e2ec6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 23 10:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4fa79ca8fed02c03341cf74fd3d4a8d632a5298e2f4c2ca9c770460c4cf0e1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4fa79ca8fed02c03341cf74fd3d4a8d632a5298e2f4c2ca9c770460c4cf0e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4fa79ca8fed02c03341cf74fd3d4a8d632a5298e2f4c2ca9c770460c4cf0e1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4fa79ca8fed02c03341cf74fd3d4a8d632a5298e2f4c2ca9c770460c4cf0e1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:13 compute-0 podman[257996]: 2026-01-23 10:18:13.480716921 +0000 UTC m=+0.107826541 container init c87cefca188c1caa50d4668462db3da5576faebe60401a0a328ee7e09b5e2ec6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 23 10:18:13 compute-0 podman[257996]: 2026-01-23 10:18:13.486728979 +0000 UTC m=+0.113838569 container start c87cefca188c1caa50d4668462db3da5576faebe60401a0a328ee7e09b5e2ec6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:18:13 compute-0 bash[257996]: c87cefca188c1caa50d4668462db3da5576faebe60401a0a328ee7e09b5e2ec6
Jan 23 10:18:13 compute-0 podman[257996]: 2026-01-23 10:18:13.395193888 +0000 UTC m=+0.022303488 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:18:13 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:18:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:13 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 10:18:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:13 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 10:18:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:13 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 10:18:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:13 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 10:18:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:13 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 10:18:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:13 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 10:18:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:13 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 10:18:13 compute-0 tender_haslett[257944]: {
Jan 23 10:18:13 compute-0 tender_haslett[257944]:     "1": [
Jan 23 10:18:13 compute-0 tender_haslett[257944]:         {
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             "devices": [
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "/dev/loop3"
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             ],
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             "lv_name": "ceph_lv0",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             "lv_size": "21470642176",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             "name": "ceph_lv0",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             "tags": {
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.cluster_name": "ceph",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.crush_device_class": "",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.encrypted": "0",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.osd_id": "1",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.type": "block",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.vdo": "0",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:                 "ceph.with_tpm": "0"
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             },
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             "type": "block",
Jan 23 10:18:13 compute-0 tender_haslett[257944]:             "vg_name": "ceph_vg0"
Jan 23 10:18:13 compute-0 tender_haslett[257944]:         }
Jan 23 10:18:13 compute-0 tender_haslett[257944]:     ]
Jan 23 10:18:13 compute-0 tender_haslett[257944]: }
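
The JSON emitted by the tender_haslett container above is the result of the cephadm "ceph-volume ... lvm list --format json" call launched via sudo at 10:18:12: top-level keys are OSD ids, each mapped to a list of logical volumes with their devices, lv_path, and ceph.* tags. A minimal sketch, assuming that output has been captured to a file named lvm_list.json (the file name is an assumption), for mapping each OSD id to its backing logical volume:

    #!/usr/bin/env python3
    """Sketch: map OSD ids to LV block devices from 'ceph-volume lvm list --format json'.

    Assumption: the JSON printed by the tender_haslett container has been
    saved to a file named 'lvm_list.json'.
    """
    import json

    with open("lvm_list.json", encoding="utf-8") as fh:
        lvm = json.load(fh)

    for osd_id, lvs in lvm.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: type={lv.get('type')} "
                  f"path={lv.get('lv_path')} "
                  f"devices={','.join(lv.get('devices', []))} "
                  f"osd_fsid={tags.get('ceph.osd_fsid')}")

For the entry above it would print osd.1 as a block-type LV at /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3 with osd_fsid e272688e-6b15-4719-9011-a7e7310819a5.
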
Jan 23 10:18:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:13 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:18:13 compute-0 systemd[1]: libpod-895dc39f8c3cebd85414b60437956d5407f9590a9f27f045c99d251adf88f175.scope: Deactivated successfully.
Jan 23 10:18:13 compute-0 podman[257928]: 2026-01-23 10:18:13.586778007 +0000 UTC m=+0.458370820 container died 895dc39f8c3cebd85414b60437956d5407f9590a9f27f045c99d251adf88f175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 23 10:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fea72a0f64de8680f641cef0313e5cdcb62df3a141cc38151d524e40a6c1feb6-merged.mount: Deactivated successfully.
Jan 23 10:18:13 compute-0 podman[257928]: 2026-01-23 10:18:13.629264605 +0000 UTC m=+0.500857408 container remove 895dc39f8c3cebd85414b60437956d5407f9590a9f27f045c99d251adf88f175 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:18:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:13.629Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:18:13 compute-0 systemd[1]: libpod-conmon-895dc39f8c3cebd85414b60437956d5407f9590a9f27f045c99d251adf88f175.scope: Deactivated successfully.
Jan 23 10:18:13 compute-0 sudo[257814]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:13 compute-0 sudo[258071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:18:13 compute-0 sudo[258071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:13 compute-0 sudo[258071]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:13 compute-0 sudo[258096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:18:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:13.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:13 compute-0 sudo[258096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:14 compute-0 podman[258162]: 2026-01-23 10:18:14.17868407 +0000 UTC m=+0.028829762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:18:14 compute-0 podman[258162]: 2026-01-23 10:18:14.324967622 +0000 UTC m=+0.175113284 container create 1342c12e45d61892906a06badd5334c446d05c056cd64ea55d55d2c659db0302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_satoshi, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:18:14 compute-0 systemd[1]: Started libpod-conmon-1342c12e45d61892906a06badd5334c446d05c056cd64ea55d55d2c659db0302.scope.
Jan 23 10:18:14 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:18:14 compute-0 podman[258162]: 2026-01-23 10:18:14.395513348 +0000 UTC m=+0.245659000 container init 1342c12e45d61892906a06badd5334c446d05c056cd64ea55d55d2c659db0302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_satoshi, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:18:14 compute-0 podman[258162]: 2026-01-23 10:18:14.401906468 +0000 UTC m=+0.252052130 container start 1342c12e45d61892906a06badd5334c446d05c056cd64ea55d55d2c659db0302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_satoshi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:18:14 compute-0 podman[258162]: 2026-01-23 10:18:14.405164419 +0000 UTC m=+0.255310101 container attach 1342c12e45d61892906a06badd5334c446d05c056cd64ea55d55d2c659db0302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 10:18:14 compute-0 confident_satoshi[258178]: 167 167
Jan 23 10:18:14 compute-0 systemd[1]: libpod-1342c12e45d61892906a06badd5334c446d05c056cd64ea55d55d2c659db0302.scope: Deactivated successfully.
Jan 23 10:18:14 compute-0 podman[258162]: 2026-01-23 10:18:14.406611595 +0000 UTC m=+0.256757267 container died 1342c12e45d61892906a06badd5334c446d05c056cd64ea55d55d2c659db0302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 10:18:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-882259f2c277d41a16349f2e74318afc3ed23d430aa6b218f17064c6c3bddce4-merged.mount: Deactivated successfully.
Jan 23 10:18:14 compute-0 podman[258162]: 2026-01-23 10:18:14.441592598 +0000 UTC m=+0.291738260 container remove 1342c12e45d61892906a06badd5334c446d05c056cd64ea55d55d2c659db0302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_satoshi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:18:14 compute-0 systemd[1]: libpod-conmon-1342c12e45d61892906a06badd5334c446d05c056cd64ea55d55d2c659db0302.scope: Deactivated successfully.
Jan 23 10:18:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:14.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:18:14 compute-0 nova_compute[249229]: 2026-01-23 10:18:14.692 249233 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163479.690431, f385572a-ade5-4da0-b6d8-d6bb5cdc919e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:18:14 compute-0 nova_compute[249229]: 2026-01-23 10:18:14.693 249233 INFO nova.compute.manager [-] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] VM Stopped (Lifecycle Event)
Jan 23 10:18:14 compute-0 podman[258200]: 2026-01-23 10:18:14.607464773 +0000 UTC m=+0.024481786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:18:14 compute-0 nova_compute[249229]: 2026-01-23 10:18:14.718 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:14 compute-0 podman[258200]: 2026-01-23 10:18:14.957660661 +0000 UTC m=+0.374677654 container create 932e61a249903922f815a623633a0a5f1199688e44501de2e3bad72b774a1787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:18:14 compute-0 nova_compute[249229]: 2026-01-23 10:18:14.975 249233 DEBUG nova.compute.manager [None req-ee41a833-fe20-4461-bd78-614b854e03db - - - - - -] [instance: f385572a-ade5-4da0-b6d8-d6bb5cdc919e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:18:14 compute-0 ceph-mon[74335]: pgmap v749: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:18:14 compute-0 systemd[1]: Started libpod-conmon-932e61a249903922f815a623633a0a5f1199688e44501de2e3bad72b774a1787.scope.
Jan 23 10:18:15 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c8f8b27d7c543bf50597735dddceadf42f0fc4f26f5d10dfa3acbed3ac7f1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c8f8b27d7c543bf50597735dddceadf42f0fc4f26f5d10dfa3acbed3ac7f1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c8f8b27d7c543bf50597735dddceadf42f0fc4f26f5d10dfa3acbed3ac7f1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c8f8b27d7c543bf50597735dddceadf42f0fc4f26f5d10dfa3acbed3ac7f1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:15 compute-0 podman[258200]: 2026-01-23 10:18:15.050296207 +0000 UTC m=+0.467313200 container init 932e61a249903922f815a623633a0a5f1199688e44501de2e3bad72b774a1787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_haibt, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:18:15 compute-0 podman[258200]: 2026-01-23 10:18:15.056862382 +0000 UTC m=+0.473879385 container start 932e61a249903922f815a623633a0a5f1199688e44501de2e3bad72b774a1787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 10:18:15 compute-0 podman[258200]: 2026-01-23 10:18:15.060263408 +0000 UTC m=+0.477280401 container attach 932e61a249903922f815a623633a0a5f1199688e44501de2e3bad72b774a1787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 10:18:15 compute-0 lvm[258291]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:18:15 compute-0 lvm[258291]: VG ceph_vg0 finished
Jan 23 10:18:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:15.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:15 compute-0 quirky_haibt[258217]: {}
Jan 23 10:18:15 compute-0 systemd[1]: libpod-932e61a249903922f815a623633a0a5f1199688e44501de2e3bad72b774a1787.scope: Deactivated successfully.
Jan 23 10:18:15 compute-0 systemd[1]: libpod-932e61a249903922f815a623633a0a5f1199688e44501de2e3bad72b774a1787.scope: Consumed 1.125s CPU time.
Jan 23 10:18:15 compute-0 podman[258200]: 2026-01-23 10:18:15.818631645 +0000 UTC m=+1.235648648 container died 932e61a249903922f815a623633a0a5f1199688e44501de2e3bad72b774a1787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c8f8b27d7c543bf50597735dddceadf42f0fc4f26f5d10dfa3acbed3ac7f1c-merged.mount: Deactivated successfully.
Jan 23 10:18:15 compute-0 podman[258200]: 2026-01-23 10:18:15.860849374 +0000 UTC m=+1.277866367 container remove 932e61a249903922f815a623633a0a5f1199688e44501de2e3bad72b774a1787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_haibt, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 23 10:18:15 compute-0 systemd[1]: libpod-conmon-932e61a249903922f815a623633a0a5f1199688e44501de2e3bad72b774a1787.scope: Deactivated successfully.
Jan 23 10:18:15 compute-0 sudo[258096]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:18:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:18:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:18:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:18:16 compute-0 sudo[258306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:18:16 compute-0 sudo[258306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:16 compute-0 sudo[258306]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:16.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v750: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:18:16 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:18:16 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:18:16 compute-0 ceph-mon[74335]: pgmap v750: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:18:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:17.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:18:17 compute-0 nova_compute[249229]: 2026-01-23 10:18:17.134 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:17.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:18.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:18:19 compute-0 ceph-mon[74335]: pgmap v751: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:18:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:19 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:18:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:19 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:18:19 compute-0 nova_compute[249229]: 2026-01-23 10:18:19.720 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:19.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:19] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Jan 23 10:18:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:19] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Jan 23 10:18:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:18:19
Jan 23 10:18:19 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:18:19 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:18:19 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'images', '.nfs', 'volumes', 'backups', 'default.rgw.log']
Jan 23 10:18:19 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:18:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:18:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:18:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:20.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:18:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:18:21 compute-0 podman[258336]: 2026-01-23 10:18:21.521481583 +0000 UTC m=+0.052655787 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 10:18:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:18:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:21.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:18:22 compute-0 ceph-mon[74335]: pgmap v752: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:18:22 compute-0 nova_compute[249229]: 2026-01-23 10:18:22.137 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:22.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v753: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:18:23 compute-0 ceph-mon[74335]: pgmap v753: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:18:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/453783286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:18:23 compute-0 sudo[258357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:18:23 compute-0 sudo[258357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:23 compute-0 sudo[258357]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:23.630Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:18:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:23.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:24.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:18:24 compute-0 nova_compute[249229]: 2026-01-23 10:18:24.721 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:25.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:25 compute-0 ceph-mon[74335]: pgmap v754: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Jan 23 10:18:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:25 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:18:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:25 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 10:18:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:25 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 10:18:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:25 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 10:18:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:25 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 10:18:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:25 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 10:18:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:25 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 10:18:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:25 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:18:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:25 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:18:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:25 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 10:18:26 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:26 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:18:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:26.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 88 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 23 10:18:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:27.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:18:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:27.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:18:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:27.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:18:27 compute-0 nova_compute[249229]: 2026-01-23 10:18:27.141 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:27 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddc0000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:27 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda8000da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:27 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:27 compute-0 ceph-mon[74335]: pgmap v755: 353 pgs: 353 active+clean; 88 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 23 10:18:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:18:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:27.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:18:28 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 23 10:18:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:28.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 88 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:18:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3880301074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:18:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1407883863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:18:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:29 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd94000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101829 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:18:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:29 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:29 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda8001ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:29 compute-0 nova_compute[249229]: 2026-01-23 10:18:29.722 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:29.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:29] "GET /metrics HTTP/1.1" 200 48524 "" "Prometheus/2.51.0"
Jan 23 10:18:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:29] "GET /metrics HTTP/1.1" 200 48524 "" "Prometheus/2.51.0"
Jan 23 10:18:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:30.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 88 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:18:30 compute-0 ceph-mon[74335]: pgmap v756: 353 pgs: 353 active+clean; 88 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:18:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:31 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:31 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:31 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb80025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:31 compute-0 ceph-mon[74335]: pgmap v757: 353 pgs: 353 active+clean; 88 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:18:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:31.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:32 compute-0 nova_compute[249229]: 2026-01-23 10:18:32.143 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:18:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:32.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:18:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 23 10:18:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:33 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda8001ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:33 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:33 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:33.631Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:18:33 compute-0 ceph-mon[74335]: pgmap v758: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 23 10:18:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:33.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:34.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 23 10:18:34 compute-0 nova_compute[249229]: 2026-01-23 10:18:34.723 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:18:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:18:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:35 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb80025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:35 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda80027e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:35 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:35 compute-0 ceph-mon[74335]: pgmap v759: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 23 10:18:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:18:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:35.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:36.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 23 10:18:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:37.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:18:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:37 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:37 compute-0 nova_compute[249229]: 2026-01-23 10:18:37.180 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 10:18:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:37 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb80032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:37 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda80027e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:37.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:38 compute-0 ceph-mon[74335]: pgmap v760: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 23 10:18:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:38.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 23 10:18:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:39 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:39 compute-0 ceph-mon[74335]: pgmap v761: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 23 10:18:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:39 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd94002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:39 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb80032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:39 compute-0 nova_compute[249229]: 2026-01-23 10:18:39.724 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:39.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:39] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Jan 23 10:18:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:39] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Jan 23 10:18:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:40.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 23 10:18:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:41 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda80027e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:18:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258012]: 23/01/2026 10:18:41 : epoch 69734ae5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9c002b10 fd 39 proxy ignored for local
Jan 23 10:18:41 compute-0 kernel: ganesha.nfsd[258399]: segfault at 50 ip 00007fde48ab732e sp 00007fddacff8210 error 4 in libntirpc.so.5.8[7fde48a9c000+2c000] likely on CPU 4 (core 0, socket 4)
Jan 23 10:18:41 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 23 10:18:41 compute-0 systemd[1]: Started Process Core Dump (PID 258418/UID 0).
Jan 23 10:18:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:41.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:41 compute-0 ceph-mon[74335]: pgmap v762: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 23 10:18:42 compute-0 nova_compute[249229]: 2026-01-23 10:18:42.181 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:42.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 23 10:18:42 compute-0 podman[258421]: 2026-01-23 10:18:42.567334721 +0000 UTC m=+0.091789580 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 23 10:18:42 compute-0 systemd-coredump[258419]: Process 258018 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 53:
                                                    #0  0x00007fde48ab732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 23 10:18:43 compute-0 ceph-mon[74335]: pgmap v763: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 23 10:18:43 compute-0 systemd[1]: systemd-coredump@9-258418-0.service: Deactivated successfully.
Jan 23 10:18:43 compute-0 systemd[1]: systemd-coredump@9-258418-0.service: Consumed 1.437s CPU time.
Jan 23 10:18:43 compute-0 podman[258452]: 2026-01-23 10:18:43.111516542 +0000 UTC m=+0.019847911 container died c87cefca188c1caa50d4668462db3da5576faebe60401a0a328ee7e09b5e2ec6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 10:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a4fa79ca8fed02c03341cf74fd3d4a8d632a5298e2f4c2ca9c770460c4cf0e1-merged.mount: Deactivated successfully.
Jan 23 10:18:43 compute-0 podman[258452]: 2026-01-23 10:18:43.15047824 +0000 UTC m=+0.058809609 container remove c87cefca188c1caa50d4668462db3da5576faebe60401a0a328ee7e09b5e2ec6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 23 10:18:43 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
Jan 23 10:18:43 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 10:18:43 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.551s CPU time.
Jan 23 10:18:43 compute-0 ovn_controller[151634]: 2026-01-23T10:18:43Z|00038|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 23 10:18:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:43.632Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:18:43 compute-0 sudo[258495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:18:43 compute-0 sudo[258495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:18:43 compute-0 sudo[258495]: pam_unix(sudo:session): session closed for user root
Jan 23 10:18:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:43.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:44.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v764: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Jan 23 10:18:44 compute-0 nova_compute[249229]: 2026-01-23 10:18:44.726 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:45 compute-0 ceph-mon[74335]: pgmap v764: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Jan 23 10:18:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:45.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:46.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v765: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 23 10:18:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:47.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:18:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:47.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:18:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:47.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:18:47 compute-0 nova_compute[249229]: 2026-01-23 10:18:47.276 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101847 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:18:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:47.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:47 compute-0 ceph-mon[74335]: pgmap v765: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 23 10:18:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:48.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:18:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:18:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2862770348' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:18:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:18:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2862770348' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:18:48 compute-0 ceph-mon[74335]: pgmap v766: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:18:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2862770348' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:18:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2862770348' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:18:49 compute-0 nova_compute[249229]: 2026-01-23 10:18:49.727 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:49.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:49] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Jan 23 10:18:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:49] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Jan 23 10:18:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:18:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:18:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:18:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:18:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:18:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:18:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:18:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:18:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:18:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:50.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v767: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:18:51 compute-0 ceph-mon[74335]: pgmap v767: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:18:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:18:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:51.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:18:52 compute-0 nova_compute[249229]: 2026-01-23 10:18:52.278 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:52 compute-0 podman[258530]: 2026-01-23 10:18:52.55522314 +0000 UTC m=+0.049604292 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent)
Jan 23 10:18:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:18:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:52.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:53 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 10.
Jan 23 10:18:53 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:18:53 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.551s CPU time.
Jan 23 10:18:53 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 10:18:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:53.633Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:18:53 compute-0 podman[258601]: 2026-01-23 10:18:53.673935191 +0000 UTC m=+0.052003907 container create 12c3c919b5ab32a440341b9db41c42bf5cfc0858c3ee572f8f7ed1fe7702536b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 10:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dc14ed0f14d986ca5a2fa627839c4d7e75013f733629a0eed8b0d13ae7dec9/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dc14ed0f14d986ca5a2fa627839c4d7e75013f733629a0eed8b0d13ae7dec9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dc14ed0f14d986ca5a2fa627839c4d7e75013f733629a0eed8b0d13ae7dec9/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dc14ed0f14d986ca5a2fa627839c4d7e75013f733629a0eed8b0d13ae7dec9/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:18:53 compute-0 podman[258601]: 2026-01-23 10:18:53.745102376 +0000 UTC m=+0.123171112 container init 12c3c919b5ab32a440341b9db41c42bf5cfc0858c3ee572f8f7ed1fe7702536b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 10:18:53 compute-0 podman[258601]: 2026-01-23 10:18:53.651769358 +0000 UTC m=+0.029838104 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:18:53 compute-0 podman[258601]: 2026-01-23 10:18:53.750487614 +0000 UTC m=+0.128556330 container start 12c3c919b5ab32a440341b9db41c42bf5cfc0858c3ee572f8f7ed1fe7702536b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:18:53 compute-0 bash[258601]: 12c3c919b5ab32a440341b9db41c42bf5cfc0858c3ee572f8f7ed1fe7702536b
Jan 23 10:18:53 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:18:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:18:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 10:18:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:18:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 10:18:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:18:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 10:18:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:18:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 10:18:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:18:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 10:18:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:18:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 10:18:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:18:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 10:18:53 compute-0 ceph-mon[74335]: pgmap v768: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:18:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000063s ======
Jan 23 10:18:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:53.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 23 10:18:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:18:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:18:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v769: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:18:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:54.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:54 compute-0 nova_compute[249229]: 2026-01-23 10:18:54.728 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:18:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:55.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:18:55 compute-0 ceph-mon[74335]: pgmap v769: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:18:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:18:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v770: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 23 10:18:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:18:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:56.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:18:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:18:57.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:18:57 compute-0 nova_compute[249229]: 2026-01-23 10:18:57.280 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:57.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:57 compute-0 ceph-mon[74335]: pgmap v770: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 23 10:18:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Jan 23 10:18:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:58.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:58 compute-0 ceph-mon[74335]: pgmap v771: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Jan 23 10:18:59 compute-0 nova_compute[249229]: 2026-01-23 10:18:59.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:18:59 compute-0 nova_compute[249229]: 2026-01-23 10:18:59.729 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:18:59 compute-0 nova_compute[249229]: 2026-01-23 10:18:59.758 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:18:59 compute-0 nova_compute[249229]: 2026-01-23 10:18:59.759 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:18:59 compute-0 nova_compute[249229]: 2026-01-23 10:18:59.759 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:18:59 compute-0 nova_compute[249229]: 2026-01-23 10:18:59.759 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:18:59 compute-0 nova_compute[249229]: 2026-01-23 10:18:59.760 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:18:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:18:59.773 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:18:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:18:59.774 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:18:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:18:59.774 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:18:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:18:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:18:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:59.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:18:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:18:59 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:18:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:18:59 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:18:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:59] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Jan 23 10:18:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:18:59] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Jan 23 10:19:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:19:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911174770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:19:00 compute-0 nova_compute[249229]: 2026-01-23 10:19:00.235 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:19:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1911174770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:19:00 compute-0 nova_compute[249229]: 2026-01-23 10:19:00.400 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:19:00 compute-0 nova_compute[249229]: 2026-01-23 10:19:00.401 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4603MB free_disk=59.9427490234375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:19:00 compute-0 nova_compute[249229]: 2026-01-23 10:19:00.401 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:19:00 compute-0 nova_compute[249229]: 2026-01-23 10:19:00.402 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:19:00 compute-0 nova_compute[249229]: 2026-01-23 10:19:00.486 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:19:00 compute-0 nova_compute[249229]: 2026-01-23 10:19:00.486 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:19:00 compute-0 nova_compute[249229]: 2026-01-23 10:19:00.506 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:19:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Jan 23 10:19:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:19:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:00.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:19:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:19:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756463740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:19:00 compute-0 nova_compute[249229]: 2026-01-23 10:19:00.975 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:19:00 compute-0 nova_compute[249229]: 2026-01-23 10:19:00.981 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:19:01 compute-0 nova_compute[249229]: 2026-01-23 10:19:01.000 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:19:01 compute-0 nova_compute[249229]: 2026-01-23 10:19:01.024 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:19:01 compute-0 nova_compute[249229]: 2026-01-23 10:19:01.024 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:19:01 compute-0 ceph-mon[74335]: pgmap v772: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Jan 23 10:19:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1756463740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:19:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:19:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:01.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:19:02 compute-0 nova_compute[249229]: 2026-01-23 10:19:02.282 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 15 KiB/s wr, 4 op/s
Jan 23 10:19:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:02.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:03 compute-0 nova_compute[249229]: 2026-01-23 10:19:03.024 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:19:03 compute-0 nova_compute[249229]: 2026-01-23 10:19:03.040 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:19:03 compute-0 nova_compute[249229]: 2026-01-23 10:19:03.040 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:19:03 compute-0 nova_compute[249229]: 2026-01-23 10:19:03.040 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:19:03 compute-0 nova_compute[249229]: 2026-01-23 10:19:03.091 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:19:03 compute-0 nova_compute[249229]: 2026-01-23 10:19:03.092 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:19:03 compute-0 nova_compute[249229]: 2026-01-23 10:19:03.093 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:19:03 compute-0 nova_compute[249229]: 2026-01-23 10:19:03.093 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:19:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:03.634Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:03 compute-0 ceph-mon[74335]: pgmap v773: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 15 KiB/s wr, 4 op/s
Jan 23 10:19:03 compute-0 nova_compute[249229]: 2026-01-23 10:19:03.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:19:03 compute-0 nova_compute[249229]: 2026-01-23 10:19:03.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:19:03 compute-0 sudo[258713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:19:03 compute-0 sudo[258713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:03 compute-0 sudo[258713]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:19:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:03.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:19:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v774: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 15 KiB/s wr, 4 op/s
Jan 23 10:19:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:04.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2031067823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:19:04 compute-0 nova_compute[249229]: 2026-01-23 10:19:04.708 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:19:04 compute-0 nova_compute[249229]: 2026-01-23 10:19:04.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:19:04 compute-0 nova_compute[249229]: 2026-01-23 10:19:04.766 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:19:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:19:05 compute-0 ceph-mon[74335]: pgmap v774: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 15 KiB/s wr, 4 op/s
Jan 23 10:19:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:19:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4045296728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:19:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:19:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:05.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 10:19:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:19:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 16 KiB/s wr, 5 op/s
Jan 23 10:19:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:06.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:06 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1043744357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:19:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:07.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:07 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:07 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:07 compute-0 nova_compute[249229]: 2026-01-23 10:19:07.332 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:07 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:07 compute-0 ceph-mon[74335]: pgmap v775: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 16 KiB/s wr, 5 op/s
Jan 23 10:19:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1377108216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:19:07 compute-0 nova_compute[249229]: 2026-01-23 10:19:07.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:19:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:19:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:07.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:19:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v776: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 3.9 KiB/s wr, 4 op/s
Jan 23 10:19:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:19:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:08.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:19:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3477460864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:19:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:09 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/101909 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:19:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:09 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:09 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:09 compute-0 nova_compute[249229]: 2026-01-23 10:19:09.769 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:19:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:09.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:19:09 compute-0 nova_compute[249229]: 2026-01-23 10:19:09.846 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:09 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:19:09.847 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:19:09 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:19:09.849 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:19:09 compute-0 ceph-mon[74335]: pgmap v776: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 3.9 KiB/s wr, 4 op/s
Jan 23 10:19:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:09] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:19:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:09] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:19:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v777: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 3.9 KiB/s wr, 4 op/s
Jan 23 10:19:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:10.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:11 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:11 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:11 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a8001fc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:19:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:11.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:19:11 compute-0 ceph-mon[74335]: pgmap v777: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 3.9 KiB/s wr, 4 op/s
Jan 23 10:19:12 compute-0 nova_compute[249229]: 2026-01-23 10:19:12.333 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 7.9 KiB/s wr, 5 op/s
Jan 23 10:19:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:19:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:12.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:19:13 compute-0 ceph-mon[74335]: pgmap v778: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 7.9 KiB/s wr, 5 op/s
Jan 23 10:19:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:13 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:13 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:13 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:13 compute-0 podman[258765]: 2026-01-23 10:19:13.555309678 +0000 UTC m=+0.076264665 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 10:19:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:13.635Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:13.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v779: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 5.1 KiB/s wr, 2 op/s
Jan 23 10:19:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:14.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:14 compute-0 nova_compute[249229]: 2026-01-23 10:19:14.771 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:15 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4001140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:15 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:15 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:15 compute-0 ceph-mon[74335]: pgmap v779: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 5.1 KiB/s wr, 2 op/s
Jan 23 10:19:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:15.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:15 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:19:15.850 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:19:16 compute-0 sudo[258795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:19:16 compute-0 sudo[258795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:16 compute-0 sudo[258795]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:16 compute-0 sudo[258820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 23 10:19:16 compute-0 sudo[258820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 6.2 KiB/s wr, 30 op/s
Jan 23 10:19:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 23 10:19:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:16.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 23 10:19:16 compute-0 sudo[258820]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:19:16 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:19:16 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:16 compute-0 sudo[258867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:19:16 compute-0 sudo[258867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:16 compute-0 sudo[258867]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:16 compute-0 sudo[258892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:19:16 compute-0 sudo[258892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:17.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:17 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:17 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4001c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:17 compute-0 nova_compute[249229]: 2026-01-23 10:19:17.334 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:17 compute-0 sudo[258892]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:19:17 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:19:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:19:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:19:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:19:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:17 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:19:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:19:17 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:19:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:19:17 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:19:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:19:17 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:19:17 compute-0 sudo[258948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:19:17 compute-0 sudo[258948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:17 compute-0 sudo[258948]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:17 compute-0 sudo[258973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:19:17 compute-0 sudo[258973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:17 compute-0 ceph-mon[74335]: pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 6.2 KiB/s wr, 30 op/s
Jan 23 10:19:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/613545648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:19:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:19:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:19:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:19:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:19:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:19:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:19:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:17.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:19:18 compute-0 podman[259041]: 2026-01-23 10:19:18.060854751 +0000 UTC m=+0.041751466 container create 486419f8c848244d20e7970f930f8badc953c027a8523c49f896a7e8e3769fb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wescoff, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Jan 23 10:19:18 compute-0 systemd[1]: Started libpod-conmon-486419f8c848244d20e7970f930f8badc953c027a8523c49f896a7e8e3769fb6.scope.
Jan 23 10:19:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:19:18 compute-0 podman[259041]: 2026-01-23 10:19:18.044971584 +0000 UTC m=+0.025868319 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:19:18 compute-0 podman[259041]: 2026-01-23 10:19:18.147371725 +0000 UTC m=+0.128268450 container init 486419f8c848244d20e7970f930f8badc953c027a8523c49f896a7e8e3769fb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:19:18 compute-0 podman[259041]: 2026-01-23 10:19:18.158296137 +0000 UTC m=+0.139192852 container start 486419f8c848244d20e7970f930f8badc953c027a8523c49f896a7e8e3769fb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:19:18 compute-0 podman[259041]: 2026-01-23 10:19:18.161674962 +0000 UTC m=+0.142571717 container attach 486419f8c848244d20e7970f930f8badc953c027a8523c49f896a7e8e3769fb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wescoff, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:19:18 compute-0 focused_wescoff[259058]: 167 167
Jan 23 10:19:18 compute-0 systemd[1]: libpod-486419f8c848244d20e7970f930f8badc953c027a8523c49f896a7e8e3769fb6.scope: Deactivated successfully.
Jan 23 10:19:18 compute-0 podman[259041]: 2026-01-23 10:19:18.167708871 +0000 UTC m=+0.148605586 container died 486419f8c848244d20e7970f930f8badc953c027a8523c49f896a7e8e3769fb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wescoff, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 10:19:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-79a3b644545808e2454dd9cf05b67c32c599b41aa96db27ecd5e30b1d72d9d70-merged.mount: Deactivated successfully.
Jan 23 10:19:18 compute-0 podman[259041]: 2026-01-23 10:19:18.209305821 +0000 UTC m=+0.190202536 container remove 486419f8c848244d20e7970f930f8badc953c027a8523c49f896a7e8e3769fb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 10:19:18 compute-0 systemd[1]: libpod-conmon-486419f8c848244d20e7970f930f8badc953c027a8523c49f896a7e8e3769fb6.scope: Deactivated successfully.
Jan 23 10:19:18 compute-0 podman[259081]: 2026-01-23 10:19:18.379683488 +0000 UTC m=+0.043543933 container create 23c67bd2d1976f30f3c4925da9a9c4d318b57edd0b39d2069879ff46c305cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 23 10:19:18 compute-0 systemd[1]: Started libpod-conmon-23c67bd2d1976f30f3c4925da9a9c4d318b57edd0b39d2069879ff46c305cdb7.scope.
Jan 23 10:19:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281770ea891c0794c22009e55e61b277908ff509634a160b19cb049699bdb89c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281770ea891c0794c22009e55e61b277908ff509634a160b19cb049699bdb89c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281770ea891c0794c22009e55e61b277908ff509634a160b19cb049699bdb89c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281770ea891c0794c22009e55e61b277908ff509634a160b19cb049699bdb89c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281770ea891c0794c22009e55e61b277908ff509634a160b19cb049699bdb89c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:18 compute-0 podman[259081]: 2026-01-23 10:19:18.361803909 +0000 UTC m=+0.025664374 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:19:18 compute-0 podman[259081]: 2026-01-23 10:19:18.460229885 +0000 UTC m=+0.124090350 container init 23c67bd2d1976f30f3c4925da9a9c4d318b57edd0b39d2069879ff46c305cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_driscoll, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 23 10:19:18 compute-0 podman[259081]: 2026-01-23 10:19:18.469387722 +0000 UTC m=+0.133248187 container start 23c67bd2d1976f30f3c4925da9a9c4d318b57edd0b39d2069879ff46c305cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 10:19:18 compute-0 podman[259081]: 2026-01-23 10:19:18.47315866 +0000 UTC m=+0.137019125 container attach 23c67bd2d1976f30f3c4925da9a9c4d318b57edd0b39d2069879ff46c305cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_driscoll, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 23 10:19:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 23 10:19:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000063s ======
Jan 23 10:19:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:18.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 23 10:19:18 compute-0 blissful_driscoll[259097]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:19:18 compute-0 blissful_driscoll[259097]: --> All data devices are unavailable
Jan 23 10:19:18 compute-0 systemd[1]: libpod-23c67bd2d1976f30f3c4925da9a9c4d318b57edd0b39d2069879ff46c305cdb7.scope: Deactivated successfully.
Jan 23 10:19:18 compute-0 podman[259081]: 2026-01-23 10:19:18.814897402 +0000 UTC m=+0.478757857 container died 23c67bd2d1976f30f3c4925da9a9c4d318b57edd0b39d2069879ff46c305cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_driscoll, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:19:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-281770ea891c0794c22009e55e61b277908ff509634a160b19cb049699bdb89c-merged.mount: Deactivated successfully.
Jan 23 10:19:18 compute-0 podman[259081]: 2026-01-23 10:19:18.852086265 +0000 UTC m=+0.515946710 container remove 23c67bd2d1976f30f3c4925da9a9c4d318b57edd0b39d2069879ff46c305cdb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 10:19:18 compute-0 systemd[1]: libpod-conmon-23c67bd2d1976f30f3c4925da9a9c4d318b57edd0b39d2069879ff46c305cdb7.scope: Deactivated successfully.
Jan 23 10:19:18 compute-0 sudo[258973]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:18 compute-0 sudo[259125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:19:18 compute-0 sudo[259125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:18 compute-0 sudo[259125]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:19 compute-0 sudo[259150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:19:19 compute-0 sudo[259150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:19 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:19 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:19 compute-0 podman[259216]: 2026-01-23 10:19:19.426828481 +0000 UTC m=+0.040665472 container create a9804db06c5d573c3cbf52d0372ba48248fe70b7f9121ae8ec2c7ffc8e7db45f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_swirles, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:19:19 compute-0 systemd[1]: Started libpod-conmon-a9804db06c5d573c3cbf52d0372ba48248fe70b7f9121ae8ec2c7ffc8e7db45f.scope.
Jan 23 10:19:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:19:19 compute-0 podman[259216]: 2026-01-23 10:19:19.495091665 +0000 UTC m=+0.108928666 container init a9804db06c5d573c3cbf52d0372ba48248fe70b7f9121ae8ec2c7ffc8e7db45f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_swirles, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:19:19 compute-0 podman[259216]: 2026-01-23 10:19:19.501030631 +0000 UTC m=+0.114867622 container start a9804db06c5d573c3cbf52d0372ba48248fe70b7f9121ae8ec2c7ffc8e7db45f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:19:19 compute-0 podman[259216]: 2026-01-23 10:19:19.503490088 +0000 UTC m=+0.117327079 container attach a9804db06c5d573c3cbf52d0372ba48248fe70b7f9121ae8ec2c7ffc8e7db45f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_swirles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 10:19:19 compute-0 hungry_swirles[259230]: 167 167
Jan 23 10:19:19 compute-0 podman[259216]: 2026-01-23 10:19:19.410845941 +0000 UTC m=+0.024682962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:19:19 compute-0 systemd[1]: libpod-a9804db06c5d573c3cbf52d0372ba48248fe70b7f9121ae8ec2c7ffc8e7db45f.scope: Deactivated successfully.
Jan 23 10:19:19 compute-0 podman[259216]: 2026-01-23 10:19:19.505540692 +0000 UTC m=+0.119377693 container died a9804db06c5d573c3cbf52d0372ba48248fe70b7f9121ae8ec2c7ffc8e7db45f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_swirles, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 10:19:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:19 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f1285d98bbce1a62c8d9f81e4a959eaa57d97ea5129639401d6682193c32b28-merged.mount: Deactivated successfully.
Jan 23 10:19:19 compute-0 podman[259216]: 2026-01-23 10:19:19.533298809 +0000 UTC m=+0.147135800 container remove a9804db06c5d573c3cbf52d0372ba48248fe70b7f9121ae8ec2c7ffc8e7db45f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:19:19 compute-0 systemd[1]: libpod-conmon-a9804db06c5d573c3cbf52d0372ba48248fe70b7f9121ae8ec2c7ffc8e7db45f.scope: Deactivated successfully.
Jan 23 10:19:19 compute-0 podman[259255]: 2026-01-23 10:19:19.692186766 +0000 UTC m=+0.043549902 container create 0494829bc53c18bccc08395bf4515abbdd695a154e34e476561a520a5300d0fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:19:19 compute-0 systemd[1]: Started libpod-conmon-0494829bc53c18bccc08395bf4515abbdd695a154e34e476561a520a5300d0fd.scope.
Jan 23 10:19:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e69a38c0f07cd0389042b17c78128421530f1adb34f39cba7555b8056d59deb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e69a38c0f07cd0389042b17c78128421530f1adb34f39cba7555b8056d59deb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e69a38c0f07cd0389042b17c78128421530f1adb34f39cba7555b8056d59deb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e69a38c0f07cd0389042b17c78128421530f1adb34f39cba7555b8056d59deb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:19 compute-0 podman[259255]: 2026-01-23 10:19:19.756797066 +0000 UTC m=+0.108160232 container init 0494829bc53c18bccc08395bf4515abbdd695a154e34e476561a520a5300d0fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:19:19 compute-0 podman[259255]: 2026-01-23 10:19:19.765191568 +0000 UTC m=+0.116554704 container start 0494829bc53c18bccc08395bf4515abbdd695a154e34e476561a520a5300d0fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_haibt, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:19:19 compute-0 podman[259255]: 2026-01-23 10:19:19.768003706 +0000 UTC m=+0.119366852 container attach 0494829bc53c18bccc08395bf4515abbdd695a154e34e476561a520a5300d0fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 23 10:19:19 compute-0 podman[259255]: 2026-01-23 10:19:19.676689632 +0000 UTC m=+0.028052788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:19:19 compute-0 nova_compute[249229]: 2026-01-23 10:19:19.810 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:19 compute-0 ceph-mon[74335]: pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 23 10:19:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:19.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:19] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:19:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:19] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:19:19
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', '.mgr', 'images', 'vms', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.nfs', 'default.rgw.log', 'backups', 'default.rgw.meta']
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:19:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:19:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]: {
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:     "1": [
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:         {
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             "devices": [
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "/dev/loop3"
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             ],
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             "lv_name": "ceph_lv0",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             "lv_size": "21470642176",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             "name": "ceph_lv0",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             "tags": {
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.cluster_name": "ceph",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.crush_device_class": "",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.encrypted": "0",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.osd_id": "1",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.type": "block",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.vdo": "0",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:                 "ceph.with_tpm": "0"
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             },
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             "type": "block",
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:             "vg_name": "ceph_vg0"
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:         }
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]:     ]
Jan 23 10:19:20 compute-0 eloquent_haibt[259271]: }
Jan 23 10:19:20 compute-0 systemd[1]: libpod-0494829bc53c18bccc08395bf4515abbdd695a154e34e476561a520a5300d0fd.scope: Deactivated successfully.
Jan 23 10:19:20 compute-0 podman[259255]: 2026-01-23 10:19:20.090149936 +0000 UTC m=+0.441513082 container died 0494829bc53c18bccc08395bf4515abbdd695a154e34e476561a520a5300d0fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e69a38c0f07cd0389042b17c78128421530f1adb34f39cba7555b8056d59deb-merged.mount: Deactivated successfully.
Jan 23 10:19:20 compute-0 podman[259255]: 2026-01-23 10:19:20.130179258 +0000 UTC m=+0.481542394 container remove 0494829bc53c18bccc08395bf4515abbdd695a154e34e476561a520a5300d0fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:19:20 compute-0 systemd[1]: libpod-conmon-0494829bc53c18bccc08395bf4515abbdd695a154e34e476561a520a5300d0fd.scope: Deactivated successfully.
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:19:20 compute-0 sudo[259150]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:20 compute-0 sudo[259292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:19:20 compute-0 sudo[259292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:20 compute-0 sudo[259292]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:20 compute-0 sudo[259317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:19:20 compute-0 sudo[259317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:19:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v782: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 23 10:19:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:20.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:20 compute-0 podman[259385]: 2026-01-23 10:19:20.707098681 +0000 UTC m=+0.043421398 container create 18c92fccb979e0a09d5135c8be99df415377ef7751c03030735618d6e03d518b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_khorana, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:19:20 compute-0 systemd[1]: Started libpod-conmon-18c92fccb979e0a09d5135c8be99df415377ef7751c03030735618d6e03d518b.scope.
Jan 23 10:19:20 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:19:20 compute-0 podman[259385]: 2026-01-23 10:19:20.78190088 +0000 UTC m=+0.118223627 container init 18c92fccb979e0a09d5135c8be99df415377ef7751c03030735618d6e03d518b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_khorana, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:19:20 compute-0 podman[259385]: 2026-01-23 10:19:20.688067336 +0000 UTC m=+0.024390083 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:19:20 compute-0 podman[259385]: 2026-01-23 10:19:20.787763363 +0000 UTC m=+0.124086090 container start 18c92fccb979e0a09d5135c8be99df415377ef7751c03030735618d6e03d518b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 10:19:20 compute-0 podman[259385]: 2026-01-23 10:19:20.790601202 +0000 UTC m=+0.126923959 container attach 18c92fccb979e0a09d5135c8be99df415377ef7751c03030735618d6e03d518b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 10:19:20 compute-0 adoring_khorana[259401]: 167 167
Jan 23 10:19:20 compute-0 systemd[1]: libpod-18c92fccb979e0a09d5135c8be99df415377ef7751c03030735618d6e03d518b.scope: Deactivated successfully.
Jan 23 10:19:20 compute-0 podman[259385]: 2026-01-23 10:19:20.792891743 +0000 UTC m=+0.129214480 container died 18c92fccb979e0a09d5135c8be99df415377ef7751c03030735618d6e03d518b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc99aab4804f81c7ba6a3c040eb6dad473a382926d9c7eea11a42dca8efe715-merged.mount: Deactivated successfully.
Jan 23 10:19:20 compute-0 podman[259385]: 2026-01-23 10:19:20.826647168 +0000 UTC m=+0.162969895 container remove 18c92fccb979e0a09d5135c8be99df415377ef7751c03030735618d6e03d518b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 10:19:20 compute-0 systemd[1]: libpod-conmon-18c92fccb979e0a09d5135c8be99df415377ef7751c03030735618d6e03d518b.scope: Deactivated successfully.
Jan 23 10:19:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:19:20 compute-0 podman[259424]: 2026-01-23 10:19:20.972459426 +0000 UTC m=+0.038668779 container create eedd7b0133f1dea0be0cab7897cb280ad76657db08914fba276dcc6a57abade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 10:19:21 compute-0 systemd[1]: Started libpod-conmon-eedd7b0133f1dea0be0cab7897cb280ad76657db08914fba276dcc6a57abade1.scope.
Jan 23 10:19:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b177ad35278ca78e7eee8fb8e78c84b39814c5863f42c59891d74cd74ec383b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b177ad35278ca78e7eee8fb8e78c84b39814c5863f42c59891d74cd74ec383b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b177ad35278ca78e7eee8fb8e78c84b39814c5863f42c59891d74cd74ec383b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b177ad35278ca78e7eee8fb8e78c84b39814c5863f42c59891d74cd74ec383b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:19:21 compute-0 podman[259424]: 2026-01-23 10:19:21.043691943 +0000 UTC m=+0.109901296 container init eedd7b0133f1dea0be0cab7897cb280ad76657db08914fba276dcc6a57abade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_golick, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 23 10:19:21 compute-0 podman[259424]: 2026-01-23 10:19:21.050838097 +0000 UTC m=+0.117047450 container start eedd7b0133f1dea0be0cab7897cb280ad76657db08914fba276dcc6a57abade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_golick, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 23 10:19:21 compute-0 podman[259424]: 2026-01-23 10:19:20.957244251 +0000 UTC m=+0.023453624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:19:21 compute-0 podman[259424]: 2026-01-23 10:19:21.054854632 +0000 UTC m=+0.121064035 container attach eedd7b0133f1dea0be0cab7897cb280ad76657db08914fba276dcc6a57abade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_golick, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 10:19:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:21 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:21 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:21 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:21 compute-0 lvm[259515]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:19:21 compute-0 lvm[259515]: VG ceph_vg0 finished
Jan 23 10:19:21 compute-0 tender_golick[259441]: {}
Jan 23 10:19:21 compute-0 systemd[1]: libpod-eedd7b0133f1dea0be0cab7897cb280ad76657db08914fba276dcc6a57abade1.scope: Deactivated successfully.
Jan 23 10:19:21 compute-0 systemd[1]: libpod-eedd7b0133f1dea0be0cab7897cb280ad76657db08914fba276dcc6a57abade1.scope: Consumed 1.078s CPU time.
Jan 23 10:19:21 compute-0 podman[259424]: 2026-01-23 10:19:21.747927337 +0000 UTC m=+0.814136700 container died eedd7b0133f1dea0be0cab7897cb280ad76657db08914fba276dcc6a57abade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_golick, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:19:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b177ad35278ca78e7eee8fb8e78c84b39814c5863f42c59891d74cd74ec383b6-merged.mount: Deactivated successfully.
Jan 23 10:19:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:21 compute-0 podman[259424]: 2026-01-23 10:19:21.792879893 +0000 UTC m=+0.859089246 container remove eedd7b0133f1dea0be0cab7897cb280ad76657db08914fba276dcc6a57abade1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_golick, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:19:21 compute-0 systemd[1]: libpod-conmon-eedd7b0133f1dea0be0cab7897cb280ad76657db08914fba276dcc6a57abade1.scope: Deactivated successfully.
Jan 23 10:19:21 compute-0 sudo[259317]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:19:21 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:19:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:21.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:21 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:21 compute-0 ceph-mon[74335]: pgmap v782: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 23 10:19:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:19:21 compute-0 sudo[259533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:19:21 compute-0 sudo[259533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:21 compute-0 sudo[259533]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:22 compute-0 nova_compute[249229]: 2026-01-23 10:19:22.384 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 23 10:19:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:22.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:22 compute-0 ceph-mon[74335]: pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 23 10:19:23 compute-0 podman[259559]: 2026-01-23 10:19:23.275265672 +0000 UTC m=+0.051541232 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 10:19:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:23 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:23 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:23 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:23.637Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:19:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:23.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:19:23 compute-0 sudo[259579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:19:23 compute-0 sudo[259579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:23 compute-0 sudo[259579]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:19:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:24.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:24 compute-0 nova_compute[249229]: 2026-01-23 10:19:24.813 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:25 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:25 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:25 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:25 compute-0 ceph-mon[74335]: pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:19:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:25.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:19:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:26.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:27.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:27 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:27 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:27 compute-0 nova_compute[249229]: 2026-01-23 10:19:27.443 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:27 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:27 compute-0 ceph-mon[74335]: pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:19:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:27.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:19:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:28.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:19:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:29 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:29 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:29 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 23 10:19:29 compute-0 nova_compute[249229]: 2026-01-23 10:19:29.863 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:29.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 23 10:19:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:29] "GET /metrics HTTP/1.1" 200 48544 "" "Prometheus/2.51.0"
Jan 23 10:19:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:29] "GET /metrics HTTP/1.1" 200 48544 "" "Prometheus/2.51.0"
Jan 23 10:19:30 compute-0 ceph-mon[74335]: pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:30.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:31 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:31 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:31 compute-0 ceph-mon[74335]: pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:31 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:19:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:31.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:19:32 compute-0 nova_compute[249229]: 2026-01-23 10:19:32.445 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:32.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:33 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:33 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:33 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:33.638Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:33 compute-0 ceph-mon[74335]: pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:33.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:34.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:34 compute-0 nova_compute[249229]: 2026-01-23 10:19:34.908 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:19:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:19:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:35 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:35 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:35 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:35 compute-0 ceph-mon[74335]: pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:19:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:19:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:35.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:19:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:19:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:36.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:37.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:37 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:37 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:37 compute-0 nova_compute[249229]: 2026-01-23 10:19:37.478 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:37 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:37 compute-0 ceph-mon[74335]: pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:19:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:37.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:19:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:19:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:39 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:39 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:39 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:39.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:39 compute-0 nova_compute[249229]: 2026-01-23 10:19:39.936 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:39] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:19:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:39] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:19:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:40.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:40 compute-0 ceph-mon[74335]: pgmap v791: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:41 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:41 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:41 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:19:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:41.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:19:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:19:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 9331 writes, 35K keys, 9331 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9331 writes, 2167 syncs, 4.31 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1449 writes, 4381 keys, 1449 commit groups, 1.0 writes per commit group, ingest: 4.46 MB, 0.01 MB/s
                                           Interval WAL: 1449 writes, 617 syncs, 2.35 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 10:19:42 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3401891283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:19:42 compute-0 ceph-mon[74335]: pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:19:42 compute-0 nova_compute[249229]: 2026-01-23 10:19:42.535 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:19:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:19:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:42.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:19:43 compute-0 ceph-mon[74335]: pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:19:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:43 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:43 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:43 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:43.639Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:19:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:43.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:19:43 compute-0 sudo[259624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:19:44 compute-0 sudo[259624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:19:44 compute-0 sudo[259624]: pam_unix(sudo:session): session closed for user root
Jan 23 10:19:44 compute-0 podman[259648]: 2026-01-23 10:19:44.102549703 +0000 UTC m=+0.093658868 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 10:19:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:19:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:44.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:44 compute-0 nova_compute[249229]: 2026-01-23 10:19:44.979 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:45 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:45 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:45 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:45 compute-0 ceph-mon[74335]: pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 23 10:19:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:45.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v795: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:19:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:19:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:46.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:19:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:47.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:47 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:47 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:47 compute-0 nova_compute[249229]: 2026-01-23 10:19:47.537 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:47 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:47.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:48 compute-0 ceph-mon[74335]: pgmap v795: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:19:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2913217029' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:19:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3310066042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:19:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v796: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:19:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:19:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2936068880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:19:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:19:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2936068880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:19:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:48.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:49 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:49 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:49 compute-0 ceph-mon[74335]: pgmap v796: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:19:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2936068880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:19:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2936068880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:19:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:49 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:19:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:49.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:19:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:49] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:19:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:49] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:19:49 compute-0 nova_compute[249229]: 2026-01-23 10:19:49.982 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:19:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:19:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:19:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:19:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:19:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:19:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:19:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:19:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:19:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:19:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:50.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:51 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:51 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:51 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4001510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:51 compute-0 ceph-mon[74335]: pgmap v797: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:19:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:19:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:51.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:19:52 compute-0 nova_compute[249229]: 2026-01-23 10:19:52.582 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v798: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Jan 23 10:19:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:52.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80025a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:53 compute-0 podman[259686]: 2026-01-23 10:19:53.573784846 +0000 UTC m=+0.096968662 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 10:19:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:53.640Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:53 compute-0 ceph-mon[74335]: pgmap v798: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Jan 23 10:19:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:19:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:53.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:19:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Jan 23 10:19:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:54.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:54 compute-0 nova_compute[249229]: 2026-01-23 10:19:54.984 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:55 compute-0 ceph-mon[74335]: pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Jan 23 10:19:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:55 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4001510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:55 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:55 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80025a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:55.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:19:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:56.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:19:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:19:57.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:19:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:57 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:57 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:57 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:57 compute-0 nova_compute[249229]: 2026-01-23 10:19:57.585 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:19:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:19:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:57.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:19:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 10:19:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:58.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:59 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc002af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:59 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:19:59 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:19:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:19:59.773 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:19:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:19:59.774 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:19:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:19:59.774 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:19:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:19:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:19:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:59.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:19:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:59] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Jan 23 10:19:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:19:59] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Jan 23 10:19:59 compute-0 nova_compute[249229]: 2026-01-23 10:19:59.987 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:20:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 10:20:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:00.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:00 compute-0 nova_compute[249229]: 2026-01-23 10:20:00.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:20:00 compute-0 nova_compute[249229]: 2026-01-23 10:20:00.738 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:00 compute-0 nova_compute[249229]: 2026-01-23 10:20:00.739 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:00 compute-0 nova_compute[249229]: 2026-01-23 10:20:00.739 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:00 compute-0 nova_compute[249229]: 2026-01-23 10:20:00.739 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:20:00 compute-0 nova_compute[249229]: 2026-01-23 10:20:00.740 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:20:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:20:01 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668936281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.203 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:20:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:01 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.344 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.346 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4592MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.346 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.346 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:01 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a8001b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.421 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.422 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.436 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:20:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:01 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:20:01 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233825662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:01.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.903 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.910 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.935 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.936 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:20:01 compute-0 nova_compute[249229]: 2026-01-23 10:20:01.937 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 81 op/s
Jan 23 10:20:02 compute-0 nova_compute[249229]: 2026-01-23 10:20:02.628 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:02.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:03 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:03 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:03 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a8001b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:03.641Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:20:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:03.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:04 compute-0 sudo[259761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:20:04 compute-0 sudo[259761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:04 compute-0 sudo[259761]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 709 KiB/s rd, 28 op/s
Jan 23 10:20:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:04.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:04 compute-0 nova_compute[249229]: 2026-01-23 10:20:04.937 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:20:04 compute-0 nova_compute[249229]: 2026-01-23 10:20:04.938 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:20:04 compute-0 nova_compute[249229]: 2026-01-23 10:20:04.938 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:20:04 compute-0 nova_compute[249229]: 2026-01-23 10:20:04.953 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:20:04 compute-0 nova_compute[249229]: 2026-01-23 10:20:04.953 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:20:04 compute-0 nova_compute[249229]: 2026-01-23 10:20:04.953 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:20:04 compute-0 nova_compute[249229]: 2026-01-23 10:20:04.954 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:20:04 compute-0 nova_compute[249229]: 2026-01-23 10:20:04.954 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:20:04 compute-0 nova_compute[249229]: 2026-01-23 10:20:04.954 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:20:05 compute-0 nova_compute[249229]: 2026-01-23 10:20:05.028 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:20:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:20:05 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 23 10:20:05 compute-0 ceph-mon[74335]: paxos.0).electionLogic(15) init, last seen epoch 15, mid-election, bumping
Jan 23 10:20:05 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 10:20:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a8001b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4003710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:05 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:05 compute-0 ceph-mds[94628]: mds.beacon.cephfs.compute-0.ymknms missed beacon ack from the monitors
Jan 23 10:20:05 compute-0 nova_compute[249229]: 2026-01-23 10:20:05.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:20:05 compute-0 nova_compute[249229]: 2026-01-23 10:20:05.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:20:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:05.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:06 compute-0 ceph-mon[74335]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : monmap epoch 3
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : last_changed 2026-01-23T09:50:47.540109+0000
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : created 2026-01-23T09:47:35.499222+0000
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 23 10:20:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active} 2 up:standby
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.nbdygh(active, since 25m), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Jan 23 10:20:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1281217101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:20:06 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2381106667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 92 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 712 KiB/s rd, 588 KiB/s wr, 37 op/s
Jan 23 10:20:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:06.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:07 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:07 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:07 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4003710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:07 compute-0 nova_compute[249229]: 2026-01-23 10:20:07.630 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:07.759Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:20:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:07.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 92 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 588 KiB/s wr, 15 op/s
Jan 23 10:20:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:08.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:08 compute-0 ceph-mon[74335]: pgmap v801: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 10:20:08 compute-0 ceph-mon[74335]: overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:20:08 compute-0 ceph-mon[74335]: pgmap v802: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 10:20:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1668936281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2233825662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:08 compute-0 ceph-mon[74335]: pgmap v803: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 81 op/s
Jan 23 10:20:08 compute-0 ceph-mon[74335]: pgmap v804: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 709 KiB/s rd, 28 op/s
Jan 23 10:20:08 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:20:08 compute-0 ceph-mon[74335]: mon.compute-1 calling monitor election
Jan 23 10:20:08 compute-0 ceph-mon[74335]: mon.compute-0 calling monitor election
Jan 23 10:20:08 compute-0 ceph-mon[74335]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 23 10:20:08 compute-0 ceph-mon[74335]: monmap epoch 3
Jan 23 10:20:08 compute-0 ceph-mon[74335]: fsid f3005f84-239a-55b6-a948-8f1fb592b920
Jan 23 10:20:08 compute-0 ceph-mon[74335]: last_changed 2026-01-23T09:50:47.540109+0000
Jan 23 10:20:08 compute-0 ceph-mon[74335]: created 2026-01-23T09:47:35.499222+0000
Jan 23 10:20:08 compute-0 ceph-mon[74335]: min_mon_release 19 (squid)
Jan 23 10:20:08 compute-0 ceph-mon[74335]: election_strategy: 1
Jan 23 10:20:08 compute-0 ceph-mon[74335]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 23 10:20:08 compute-0 ceph-mon[74335]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 23 10:20:08 compute-0 ceph-mon[74335]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 23 10:20:08 compute-0 ceph-mon[74335]: fsmap cephfs:1 {0=cephfs.compute-2.prgzmm=up:active} 2 up:standby
Jan 23 10:20:08 compute-0 ceph-mon[74335]: osdmap e146: 3 total, 3 up, 3 in
Jan 23 10:20:08 compute-0 ceph-mon[74335]: mgrmap e32: compute-0.nbdygh(active, since 25m), standbys: compute-2.uczrot, compute-1.jmakme
Jan 23 10:20:08 compute-0 ceph-mon[74335]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:20:08 compute-0 ceph-mon[74335]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:20:08 compute-0 ceph-mon[74335]:      osd.1 observed slow operation indications in BlueStore
Jan 23 10:20:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1281217101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2381106667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:08 compute-0 ceph-mon[74335]: pgmap v805: 353 pgs: 353 active+clean; 92 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 712 KiB/s rd, 588 KiB/s wr, 37 op/s
Jan 23 10:20:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:09 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:09 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a8003500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:09 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:09 compute-0 nova_compute[249229]: 2026-01-23 10:20:09.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:20:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:09.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:09] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 23 10:20:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:09] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 23 10:20:09 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1702015841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:09 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1764227055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:09 compute-0 ceph-mon[74335]: pgmap v806: 353 pgs: 353 active+clean; 92 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 588 KiB/s wr, 15 op/s
Jan 23 10:20:10 compute-0 nova_compute[249229]: 2026-01-23 10:20:10.032 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 92 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 588 KiB/s wr, 15 op/s
Jan 23 10:20:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:10.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:11 compute-0 ceph-mon[74335]: pgmap v807: 353 pgs: 353 active+clean; 92 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 588 KiB/s wr, 15 op/s
Jan 23 10:20:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:11 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4003710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:11 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:11 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a8003500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:11.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 10:20:12 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check update: 2 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 23 10:20:12 compute-0 nova_compute[249229]: 2026-01-23 10:20:12.672 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:12.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:13 compute-0 ceph-mon[74335]: Health check update: 2 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 23 10:20:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:13 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:13 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4003710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:13 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:13.643Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:20:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:13.643Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:20:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:13.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:14 compute-0 ceph-mon[74335]: pgmap v808: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 10:20:14 compute-0 podman[259796]: 2026-01-23 10:20:14.58812235 +0000 UTC m=+0.102965948 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 23 10:20:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 23 10:20:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:14.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:15 compute-0 nova_compute[249229]: 2026-01-23 10:20:15.034 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:15 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:15 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:15 compute-0 ceph-mon[74335]: pgmap v809: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 23 10:20:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:15 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4003710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:15.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 300 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Jan 23 10:20:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:16.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:17 compute-0 ceph-mon[74335]: pgmap v810: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 300 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Jan 23 10:20:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:17 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4003710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:17 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a8003500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:17 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:17 compute-0 nova_compute[249229]: 2026-01-23 10:20:17.674 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:17.760Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:20:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:17.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:18 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:18.565 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:20:18 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:18.565 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:20:18 compute-0 nova_compute[249229]: 2026-01-23 10:20:18.598 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Jan 23 10:20:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:18.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:19 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a4003710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:19 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:19 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:19 compute-0 ceph-mon[74335]: pgmap v811: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Jan 23 10:20:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:19.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:19] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 23 10:20:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:19] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:20:20
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.meta', '.nfs', 'images', 'vms']
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:20:20 compute-0 nova_compute[249229]: 2026-01-23 10:20:20.037 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:20:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
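The pg_autoscaler entries above carry a small piece of arithmetic worth making explicit: for every pool, the logged "pg target" is consistent with usage_fraction × bias × an overall PG budget of 300, which would correspond to a mon_target_pg_per_osd of 100 multiplied by the 3 OSDs reported in "osdmap e146: 3 total, 3 up, 3 in". The snippet below is an illustrative sketch under those assumptions (it is not part of the captured journal); it only reproduces the raw "pg target" figures printed above, not the autoscaler's subsequent quantization step.

    # Illustrative sketch, not part of the captured journal.
    # Assumption: overall PG budget = mon_target_pg_per_osd (100) * 3 OSDs = 300.
    ASSUMED_PG_BUDGET = 100 * 3

    def pg_target(usage_fraction: float, bias: float, budget: int = ASSUMED_PG_BUDGET) -> float:
        # Reproduces the raw "pg target" figure in the pg_autoscaler lines above.
        return usage_fraction * bias * budget

    # Cross-checks against the values logged for the 'vms' and 'cephfs.cephfs.meta' pools:
    assert abs(pg_target(0.0007589550978381194, 1.0) - 0.22768652935143582) < 1e-9
    assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-9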
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:20:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Jan 23 10:20:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:20.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:20:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:21 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:21 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:21 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c80013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:21.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:22 compute-0 ceph-mon[74335]: pgmap v812: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Jan 23 10:20:22 compute-0 sudo[259833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:20:22 compute-0 sudo[259833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:22 compute-0 sudo[259833]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:22 compute-0 sudo[259858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:20:22 compute-0 sudo[259858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 10:20:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Jan 23 10:20:22 compute-0 nova_compute[249229]: 2026-01-23 10:20:22.675 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:22.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:22 compute-0 sudo[259858]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:22 compute-0 sudo[259916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:20:22 compute-0 sudo[259916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:22 compute-0 sudo[259916]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 10:20:22 compute-0 sudo[259941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- inventory --format=json-pretty --filter-for-batch
Jan 23 10:20:22 compute-0 sudo[259941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:23 compute-0 podman[260006]: 2026-01-23 10:20:23.294996787 +0000 UTC m=+0.042874261 container create ad70ab563c65a742113dde5ff919f5c8e330710ee743846d34da875a4f874da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 10:20:23 compute-0 systemd[1]: Started libpod-conmon-ad70ab563c65a742113dde5ff919f5c8e330710ee743846d34da875a4f874da1.scope.
Jan 23 10:20:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:23 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:20:23 compute-0 podman[260006]: 2026-01-23 10:20:23.273922348 +0000 UTC m=+0.021799872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:20:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:23 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a8003500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:23 compute-0 podman[260006]: 2026-01-23 10:20:23.375561676 +0000 UTC m=+0.123439170 container init ad70ab563c65a742113dde5ff919f5c8e330710ee743846d34da875a4f874da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_beaver, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 10:20:23 compute-0 podman[260006]: 2026-01-23 10:20:23.381845118 +0000 UTC m=+0.129722602 container start ad70ab563c65a742113dde5ff919f5c8e330710ee743846d34da875a4f874da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_beaver, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 10:20:23 compute-0 podman[260006]: 2026-01-23 10:20:23.385622397 +0000 UTC m=+0.133499871 container attach ad70ab563c65a742113dde5ff919f5c8e330710ee743846d34da875a4f874da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:20:23 compute-0 nice_beaver[260022]: 167 167
Jan 23 10:20:23 compute-0 systemd[1]: libpod-ad70ab563c65a742113dde5ff919f5c8e330710ee743846d34da875a4f874da1.scope: Deactivated successfully.
Jan 23 10:20:23 compute-0 podman[260006]: 2026-01-23 10:20:23.388929133 +0000 UTC m=+0.136806607 container died ad70ab563c65a742113dde5ff919f5c8e330710ee743846d34da875a4f874da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 10:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae825b313a92d69534fc3450db00bde756c6ce28c27ad13bed1fce94327be983-merged.mount: Deactivated successfully.
Jan 23 10:20:23 compute-0 podman[260006]: 2026-01-23 10:20:23.425265003 +0000 UTC m=+0.173142477 container remove ad70ab563c65a742113dde5ff919f5c8e330710ee743846d34da875a4f874da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 10:20:23 compute-0 systemd[1]: libpod-conmon-ad70ab563c65a742113dde5ff919f5c8e330710ee743846d34da875a4f874da1.scope: Deactivated successfully.
Jan 23 10:20:23 compute-0 podman[260044]: 2026-01-23 10:20:23.573666324 +0000 UTC m=+0.037141525 container create e2df7cd52045c92bd89b7971cdb6c56dd5f22940cfc83d1495333f1a2b329a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:20:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:23 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:23 compute-0 systemd[1]: Started libpod-conmon-e2df7cd52045c92bd89b7971cdb6c56dd5f22940cfc83d1495333f1a2b329a70.scope.
Jan 23 10:20:23 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815c2f8be39d944ca1fa463b9468ae869557078e93c6b28825055f85c6656899/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815c2f8be39d944ca1fa463b9468ae869557078e93c6b28825055f85c6656899/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815c2f8be39d944ca1fa463b9468ae869557078e93c6b28825055f85c6656899/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815c2f8be39d944ca1fa463b9468ae869557078e93c6b28825055f85c6656899/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:23.644Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:20:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:23.644Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:20:23 compute-0 podman[260044]: 2026-01-23 10:20:23.653548664 +0000 UTC m=+0.117023865 container init e2df7cd52045c92bd89b7971cdb6c56dd5f22940cfc83d1495333f1a2b329a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 23 10:20:23 compute-0 podman[260044]: 2026-01-23 10:20:23.558064473 +0000 UTC m=+0.021539704 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:20:23 compute-0 podman[260044]: 2026-01-23 10:20:23.664776619 +0000 UTC m=+0.128251820 container start e2df7cd52045c92bd89b7971cdb6c56dd5f22940cfc83d1495333f1a2b329a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:20:23 compute-0 podman[260044]: 2026-01-23 10:20:23.668712642 +0000 UTC m=+0.132187843 container attach e2df7cd52045c92bd89b7971cdb6c56dd5f22940cfc83d1495333f1a2b329a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:20:23 compute-0 podman[260062]: 2026-01-23 10:20:23.733329051 +0000 UTC m=+0.096252054 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 10:20:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:23.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:23 compute-0 ceph-mon[74335]: pgmap v813: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Jan 23 10:20:23 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:23 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:24 compute-0 sudo[260259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:20:24 compute-0 sudo[260259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:24 compute-0 sudo[260259]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:24 compute-0 recursing_fermi[260060]: [
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:     {
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:         "available": false,
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:         "being_replaced": false,
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:         "ceph_device_lvm": false,
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:         "lsm_data": {},
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:         "lvs": [],
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:         "path": "/dev/sr0",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:         "rejected_reasons": [
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "Has a FileSystem",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "Insufficient space (<5GB)"
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:         ],
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:         "sys_api": {
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "actuators": null,
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "device_nodes": [
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:                 "sr0"
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             ],
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "devname": "sr0",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "human_readable_size": "482.00 KB",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "id_bus": "ata",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "model": "QEMU DVD-ROM",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "nr_requests": "2",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "parent": "/dev/sr0",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "partitions": {},
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "path": "/dev/sr0",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "removable": "1",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "rev": "2.5+",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "ro": "0",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "rotational": "1",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "sas_address": "",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "sas_device_handle": "",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "scheduler_mode": "mq-deadline",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "sectors": 0,
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "sectorsize": "2048",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "size": 493568.0,
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "support_discard": "2048",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "type": "disk",
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:             "vendor": "QEMU"
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:         }
Jan 23 10:20:24 compute-0 recursing_fermi[260060]:     }
Jan 23 10:20:24 compute-0 recursing_fermi[260060]: ]
Jan 23 10:20:24 compute-0 systemd[1]: libpod-e2df7cd52045c92bd89b7971cdb6c56dd5f22940cfc83d1495333f1a2b329a70.scope: Deactivated successfully.
Jan 23 10:20:24 compute-0 podman[261265]: 2026-01-23 10:20:24.364442139 +0000 UTC m=+0.028069063 container died e2df7cd52045c92bd89b7971cdb6c56dd5f22940cfc83d1495333f1a2b329a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-815c2f8be39d944ca1fa463b9468ae869557078e93c6b28825055f85c6656899-merged.mount: Deactivated successfully.
Jan 23 10:20:24 compute-0 podman[261265]: 2026-01-23 10:20:24.405552397 +0000 UTC m=+0.069179231 container remove e2df7cd52045c92bd89b7971cdb6c56dd5f22940cfc83d1495333f1a2b329a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:20:24 compute-0 systemd[1]: libpod-conmon-e2df7cd52045c92bd89b7971cdb6c56dd5f22940cfc83d1495333f1a2b329a70.scope: Deactivated successfully.
Jan 23 10:20:24 compute-0 sudo[259941]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:20:24 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:20:24 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:24.568 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:20:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 15 KiB/s wr, 0 op/s
Jan 23 10:20:24 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:24.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:24 compute-0 nova_compute[249229]: 2026-01-23 10:20:24.774 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "73e0ca50-1f94-47c9-afce-7591b733d68d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:24 compute-0 nova_compute[249229]: 2026-01-23 10:20:24.774 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:24 compute-0 nova_compute[249229]: 2026-01-23 10:20:24.796 249233 DEBUG nova.compute.manager [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 10:20:24 compute-0 nova_compute[249229]: 2026-01-23 10:20:24.874 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:24 compute-0 nova_compute[249229]: 2026-01-23 10:20:24.876 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:24 compute-0 nova_compute[249229]: 2026-01-23 10:20:24.885 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 10:20:24 compute-0 nova_compute[249229]: 2026-01-23 10:20:24.886 249233 INFO nova.compute.claims [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Claim successful on node compute-0.ctlplane.example.com
Jan 23 10:20:24 compute-0 nova_compute[249229]: 2026-01-23 10:20:24.969 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.041 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:25 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c80013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:25 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c80013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:20:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4205134929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.412 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.418 249233 DEBUG nova.compute.provider_tree [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.437 249233 DEBUG nova.scheduler.client.report [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.458 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.459 249233 DEBUG nova.compute.manager [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.500 249233 DEBUG nova.compute.manager [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.501 249233 DEBUG nova.network.neutron [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.522 249233 INFO nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.538 249233 DEBUG nova.compute.manager [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 10:20:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:25 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a8003500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.638 249233 DEBUG nova.compute.manager [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.639 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.639 249233 INFO nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Creating image(s)
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.667 249233 DEBUG nova.storage.rbd_utils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 73e0ca50-1f94-47c9-afce-7591b733d68d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:20:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.696 249233 DEBUG nova.storage.rbd_utils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 73e0ca50-1f94-47c9-afce-7591b733d68d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.721 249233 DEBUG nova.storage.rbd_utils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 73e0ca50-1f94-47c9-afce-7591b733d68d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:20:25 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:25 compute-0 ceph-mon[74335]: pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 15 KiB/s wr, 0 op/s
Jan 23 10:20:25 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:25 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4205134929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.725 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.787 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.787 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "379b2821245bc82aa5a95839eddb9a97716b559c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.788 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "379b2821245bc82aa5a95839eddb9a97716b559c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.789 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "379b2821245bc82aa5a95839eddb9a97716b559c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.815 249233 DEBUG nova.storage.rbd_utils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 73e0ca50-1f94-47c9-afce-7591b733d68d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:20:25 compute-0 nova_compute[249229]: 2026-01-23 10:20:25.822 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c 73e0ca50-1f94-47c9-afce-7591b733d68d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:20:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:25.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:26 compute-0 nova_compute[249229]: 2026-01-23 10:20:26.500 249233 DEBUG nova.policy [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f459c4e71e6c47acb0f8aaf83f34695e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 10:20:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 1 op/s
Jan 23 10:20:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:26.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 10:20:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:27 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:27 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c80013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:27 compute-0 nova_compute[249229]: 2026-01-23 10:20:27.510 249233 DEBUG nova.network.neutron [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Successfully created port: a471e230-aa12-49dc-959a-7630183c2e5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 10:20:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:27 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c80013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:27 compute-0 nova_compute[249229]: 2026-01-23 10:20:27.676 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:27.761Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:20:27 compute-0 ceph-mon[74335]: pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 1 op/s
Jan 23 10:20:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:27.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:28 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:20:28 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:20:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:20:28 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:20:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:20:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:20:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:28.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:29 compute-0 nova_compute[249229]: 2026-01-23 10:20:29.030 249233 DEBUG nova.network.neutron [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Successfully updated port: a471e230-aa12-49dc-959a-7630183c2e5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 10:20:29 compute-0 nova_compute[249229]: 2026-01-23 10:20:29.045 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "refresh_cache-73e0ca50-1f94-47c9-afce-7591b733d68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:20:29 compute-0 nova_compute[249229]: 2026-01-23 10:20:29.045 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquired lock "refresh_cache-73e0ca50-1f94-47c9-afce-7591b733d68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:20:29 compute-0 nova_compute[249229]: 2026-01-23 10:20:29.045 249233 DEBUG nova.network.neutron [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 10:20:29 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:20:29 compute-0 nova_compute[249229]: 2026-01-23 10:20:29.161 249233 DEBUG nova.compute.manager [req-238b76bd-cbc0-469a-9ae0-679f8c46e689 req-cb89ec70-fbcd-4735-a85f-cc08fa6af2a5 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Received event network-changed-a471e230-aa12-49dc-959a-7630183c2e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:20:29 compute-0 nova_compute[249229]: 2026-01-23 10:20:29.161 249233 DEBUG nova.compute.manager [req-238b76bd-cbc0-469a-9ae0-679f8c46e689 req-cb89ec70-fbcd-4735-a85f-cc08fa6af2a5 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Refreshing instance network info cache due to event network-changed-a471e230-aa12-49dc-959a-7630183c2e5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 10:20:29 compute-0 nova_compute[249229]: 2026-01-23 10:20:29.161 249233 DEBUG oslo_concurrency.lockutils [req-238b76bd-cbc0-469a-9ae0-679f8c46e689 req-cb89ec70-fbcd-4735-a85f-cc08fa6af2a5 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "refresh_cache-73e0ca50-1f94-47c9-afce-7591b733d68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:20:29 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:29 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:29 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:20:29 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:20:29 compute-0 ceph-mon[74335]: pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:20:29 compute-0 nova_compute[249229]: 2026-01-23 10:20:29.327 249233 DEBUG nova.network.neutron [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 10:20:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:29 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:29 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:29 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:20:29 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:20:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:20:29 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:20:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:20:29 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:20:29 compute-0 nova_compute[249229]: 2026-01-23 10:20:29.420 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c 73e0ca50-1f94-47c9-afce-7591b733d68d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:20:29 compute-0 sudo[261401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:20:29 compute-0 sudo[261401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:29 compute-0 sudo[261401]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:29 compute-0 nova_compute[249229]: 2026-01-23 10:20:29.511 249233 DEBUG nova.storage.rbd_utils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] resizing rbd image 73e0ca50-1f94-47c9-afce-7591b733d68d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 10:20:29 compute-0 sudo[261459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:20:29 compute-0 sudo[261459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:29 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:29.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:29 compute-0 podman[261546]: 2026-01-23 10:20:29.956749692 +0000 UTC m=+0.042022386 container create ce837009f8c78a99bb4776c57ceeea4a2ea245547d7e15fb2669a29d7bfd8a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_blackburn, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 10:20:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:29] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:20:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:29] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:20:29 compute-0 systemd[1]: Started libpod-conmon-ce837009f8c78a99bb4776c57ceeea4a2ea245547d7e15fb2669a29d7bfd8a42.scope.
Jan 23 10:20:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:20:30 compute-0 podman[261546]: 2026-01-23 10:20:30.032567444 +0000 UTC m=+0.117840168 container init ce837009f8c78a99bb4776c57ceeea4a2ea245547d7e15fb2669a29d7bfd8a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_blackburn, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:20:30 compute-0 podman[261546]: 2026-01-23 10:20:29.938292248 +0000 UTC m=+0.023564962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:20:30 compute-0 podman[261546]: 2026-01-23 10:20:30.040296247 +0000 UTC m=+0.125568961 container start ce837009f8c78a99bb4776c57ceeea4a2ea245547d7e15fb2669a29d7bfd8a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_blackburn, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:20:30 compute-0 ecstatic_blackburn[261562]: 167 167
Jan 23 10:20:30 compute-0 podman[261546]: 2026-01-23 10:20:30.044598612 +0000 UTC m=+0.129871326 container attach ce837009f8c78a99bb4776c57ceeea4a2ea245547d7e15fb2669a29d7bfd8a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 23 10:20:30 compute-0 systemd[1]: libpod-ce837009f8c78a99bb4776c57ceeea4a2ea245547d7e15fb2669a29d7bfd8a42.scope: Deactivated successfully.
Jan 23 10:20:30 compute-0 conmon[261562]: conmon ce837009f8c78a99bb47 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce837009f8c78a99bb4776c57ceeea4a2ea245547d7e15fb2669a29d7bfd8a42.scope/container/memory.events
Jan 23 10:20:30 compute-0 podman[261546]: 2026-01-23 10:20:30.046502177 +0000 UTC m=+0.131774881 container died ce837009f8c78a99bb4776c57ceeea4a2ea245547d7e15fb2669a29d7bfd8a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.047 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bacef05c14ca3c10dfdc0a28f5e4111b3c0f31cee9b8773bd82f13e536c86e7-merged.mount: Deactivated successfully.
Jan 23 10:20:30 compute-0 podman[261546]: 2026-01-23 10:20:30.087635306 +0000 UTC m=+0.172908020 container remove ce837009f8c78a99bb4776c57ceeea4a2ea245547d7e15fb2669a29d7bfd8a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 10:20:30 compute-0 systemd[1]: libpod-conmon-ce837009f8c78a99bb4776c57ceeea4a2ea245547d7e15fb2669a29d7bfd8a42.scope: Deactivated successfully.
Jan 23 10:20:30 compute-0 podman[261586]: 2026-01-23 10:20:30.239748534 +0000 UTC m=+0.038942667 container create ea856e0e42128995964d3acfafc4d8b8be4898937271ef6f006fd9f42569bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:20:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:20:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:20:30 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:20:30 compute-0 systemd[1]: Started libpod-conmon-ea856e0e42128995964d3acfafc4d8b8be4898937271ef6f006fd9f42569bf38.scope.
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.285 249233 DEBUG nova.objects.instance [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'migration_context' on Instance uuid 73e0ca50-1f94-47c9-afce-7591b733d68d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.297 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.297 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Ensure instance console log exists: /var/lib/nova/instances/73e0ca50-1f94-47c9-afce-7591b733d68d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.298 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.298 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.299 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8061146add6998e7a49e4567c8ca8c20cebf45ebcdea465189a3b6672d26bd36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8061146add6998e7a49e4567c8ca8c20cebf45ebcdea465189a3b6672d26bd36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8061146add6998e7a49e4567c8ca8c20cebf45ebcdea465189a3b6672d26bd36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8061146add6998e7a49e4567c8ca8c20cebf45ebcdea465189a3b6672d26bd36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8061146add6998e7a49e4567c8ca8c20cebf45ebcdea465189a3b6672d26bd36/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:30 compute-0 podman[261586]: 2026-01-23 10:20:30.223776563 +0000 UTC m=+0.022970716 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:20:30 compute-0 podman[261586]: 2026-01-23 10:20:30.322209369 +0000 UTC m=+0.121403522 container init ea856e0e42128995964d3acfafc4d8b8be4898937271ef6f006fd9f42569bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 10:20:30 compute-0 podman[261586]: 2026-01-23 10:20:30.331450806 +0000 UTC m=+0.130644939 container start ea856e0e42128995964d3acfafc4d8b8be4898937271ef6f006fd9f42569bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 10:20:30 compute-0 podman[261586]: 2026-01-23 10:20:30.334636048 +0000 UTC m=+0.133830211 container attach ea856e0e42128995964d3acfafc4d8b8be4898937271ef6f006fd9f42569bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 23 10:20:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:20:30 compute-0 modest_leavitt[261621]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:20:30 compute-0 modest_leavitt[261621]: --> All data devices are unavailable
Jan 23 10:20:30 compute-0 systemd[1]: libpod-ea856e0e42128995964d3acfafc4d8b8be4898937271ef6f006fd9f42569bf38.scope: Deactivated successfully.
Jan 23 10:20:30 compute-0 conmon[261621]: conmon ea856e0e42128995964d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea856e0e42128995964d3acfafc4d8b8be4898937271ef6f006fd9f42569bf38.scope/container/memory.events
Jan 23 10:20:30 compute-0 podman[261586]: 2026-01-23 10:20:30.676444111 +0000 UTC m=+0.475638244 container died ea856e0e42128995964d3acfafc4d8b8be4898937271ef6f006fd9f42569bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:20:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8061146add6998e7a49e4567c8ca8c20cebf45ebcdea465189a3b6672d26bd36-merged.mount: Deactivated successfully.
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.710 249233 DEBUG nova.network.neutron [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Updating instance_info_cache with network_info: [{"id": "a471e230-aa12-49dc-959a-7630183c2e5a", "address": "fa:16:3e:ab:ce:c8", "network": {"id": "13f52d23-9898-43a0-a951-b69cb2abebab", "bridge": "br-int", "label": "tempest-network-smoke--2142663020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa471e230-aa", "ovs_interfaceid": "a471e230-aa12-49dc-959a-7630183c2e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:20:30 compute-0 podman[261586]: 2026-01-23 10:20:30.717262481 +0000 UTC m=+0.516456614 container remove ea856e0e42128995964d3acfafc4d8b8be4898937271ef6f006fd9f42569bf38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 23 10:20:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:30.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:30 compute-0 systemd[1]: libpod-conmon-ea856e0e42128995964d3acfafc4d8b8be4898937271ef6f006fd9f42569bf38.scope: Deactivated successfully.
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.735 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Releasing lock "refresh_cache-73e0ca50-1f94-47c9-afce-7591b733d68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.736 249233 DEBUG nova.compute.manager [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Instance network_info: |[{"id": "a471e230-aa12-49dc-959a-7630183c2e5a", "address": "fa:16:3e:ab:ce:c8", "network": {"id": "13f52d23-9898-43a0-a951-b69cb2abebab", "bridge": "br-int", "label": "tempest-network-smoke--2142663020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa471e230-aa", "ovs_interfaceid": "a471e230-aa12-49dc-959a-7630183c2e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.736 249233 DEBUG oslo_concurrency.lockutils [req-238b76bd-cbc0-469a-9ae0-679f8c46e689 req-cb89ec70-fbcd-4735-a85f-cc08fa6af2a5 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquired lock "refresh_cache-73e0ca50-1f94-47c9-afce-7591b733d68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.737 249233 DEBUG nova.network.neutron [req-238b76bd-cbc0-469a-9ae0-679f8c46e689 req-cb89ec70-fbcd-4735-a85f-cc08fa6af2a5 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Refreshing network info cache for port a471e230-aa12-49dc-959a-7630183c2e5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.739 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Start _get_guest_xml network_info=[{"id": "a471e230-aa12-49dc-959a-7630183c2e5a", "address": "fa:16:3e:ab:ce:c8", "network": {"id": "13f52d23-9898-43a0-a951-b69cb2abebab", "bridge": "br-int", "label": "tempest-network-smoke--2142663020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa471e230-aa", "ovs_interfaceid": "a471e230-aa12-49dc-959a-7630183c2e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T10:15:36Z,direct_url=<?>,disk_format='qcow2',id=271ec98e-d058-421b-bbfb-4b4a5954c90a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5220cd4f58cb43bb899e367e961bc5c1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T10:15:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'size': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '271ec98e-d058-421b-bbfb-4b4a5954c90a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.744 249233 WARNING nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.748 249233 DEBUG nova.virt.libvirt.host [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.750 249233 DEBUG nova.virt.libvirt.host [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.754 249233 DEBUG nova.virt.libvirt.host [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.755 249233 DEBUG nova.virt.libvirt.host [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.755 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.755 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T10:15:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1d8c8bf4-786e-4009-bc53-f259480fb5b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T10:15:36Z,direct_url=<?>,disk_format='qcow2',id=271ec98e-d058-421b-bbfb-4b4a5954c90a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5220cd4f58cb43bb899e367e961bc5c1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T10:15:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.756 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.756 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.756 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.756 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.757 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.757 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.757 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.757 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.757 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.758 249233 DEBUG nova.virt.hardware [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 10:20:30 compute-0 nova_compute[249229]: 2026-01-23 10:20:30.760 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:20:30 compute-0 sudo[261459]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:30 compute-0 sudo[261650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:20:30 compute-0 sudo[261650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:30 compute-0 sudo[261650]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:30 compute-0 sudo[261675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:20:30 compute-0 sudo[261675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 10:20:31 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/733879018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.218 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.253 249233 DEBUG nova.storage.rbd_utils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 73e0ca50-1f94-47c9-afce-7591b733d68d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.263 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:20:31 compute-0 ceph-mon[74335]: pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 23 10:20:31 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/733879018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:20:31 compute-0 podman[261782]: 2026-01-23 10:20:31.318027301 +0000 UTC m=+0.047677029 container create f0fbebe9f0891e237d9bf8b0a35befd2cf3c09a51ebe19a8df228d0def510573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 10:20:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:31 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:31 compute-0 systemd[1]: Started libpod-conmon-f0fbebe9f0891e237d9bf8b0a35befd2cf3c09a51ebe19a8df228d0def510573.scope.
Jan 23 10:20:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:31 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:20:31 compute-0 podman[261782]: 2026-01-23 10:20:31.297768116 +0000 UTC m=+0.027417854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:20:31 compute-0 podman[261782]: 2026-01-23 10:20:31.407045805 +0000 UTC m=+0.136695543 container init f0fbebe9f0891e237d9bf8b0a35befd2cf3c09a51ebe19a8df228d0def510573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_chatterjee, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 10:20:31 compute-0 podman[261782]: 2026-01-23 10:20:31.416911731 +0000 UTC m=+0.146561459 container start f0fbebe9f0891e237d9bf8b0a35befd2cf3c09a51ebe19a8df228d0def510573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_chatterjee, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:20:31 compute-0 podman[261782]: 2026-01-23 10:20:31.420744841 +0000 UTC m=+0.150394599 container attach f0fbebe9f0891e237d9bf8b0a35befd2cf3c09a51ebe19a8df228d0def510573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Jan 23 10:20:31 compute-0 elated_chatterjee[261799]: 167 167
Jan 23 10:20:31 compute-0 systemd[1]: libpod-f0fbebe9f0891e237d9bf8b0a35befd2cf3c09a51ebe19a8df228d0def510573.scope: Deactivated successfully.
Jan 23 10:20:31 compute-0 podman[261782]: 2026-01-23 10:20:31.424239693 +0000 UTC m=+0.153889431 container died f0fbebe9f0891e237d9bf8b0a35befd2cf3c09a51ebe19a8df228d0def510573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3efdc397f5ad6cf67c813544e07f968f2db6245454b051d903173eaf04462a71-merged.mount: Deactivated successfully.
Jan 23 10:20:31 compute-0 podman[261782]: 2026-01-23 10:20:31.457489004 +0000 UTC m=+0.187138732 container remove f0fbebe9f0891e237d9bf8b0a35befd2cf3c09a51ebe19a8df228d0def510573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_chatterjee, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 23 10:20:31 compute-0 systemd[1]: libpod-conmon-f0fbebe9f0891e237d9bf8b0a35befd2cf3c09a51ebe19a8df228d0def510573.scope: Deactivated successfully.
Jan 23 10:20:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:31 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:31 compute-0 podman[261841]: 2026-01-23 10:20:31.60606442 +0000 UTC m=+0.041710867 container create 7ae6bb326e4e420d7ef3019dbcd5fbfb50b4b0d77d7f80ed0312c184658c42a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 10:20:31 compute-0 systemd[1]: Started libpod-conmon-7ae6bb326e4e420d7ef3019dbcd5fbfb50b4b0d77d7f80ed0312c184658c42a8.scope.
Jan 23 10:20:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a896de77a7bb6c86d7d80559c09e55b7610fcc4ef4c3488032fddaddbeb0a2b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a896de77a7bb6c86d7d80559c09e55b7610fcc4ef4c3488032fddaddbeb0a2b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a896de77a7bb6c86d7d80559c09e55b7610fcc4ef4c3488032fddaddbeb0a2b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a896de77a7bb6c86d7d80559c09e55b7610fcc4ef4c3488032fddaddbeb0a2b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:31 compute-0 podman[261841]: 2026-01-23 10:20:31.586841894 +0000 UTC m=+0.022488351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:20:31 compute-0 podman[261841]: 2026-01-23 10:20:31.683141898 +0000 UTC m=+0.118788355 container init 7ae6bb326e4e420d7ef3019dbcd5fbfb50b4b0d77d7f80ed0312c184658c42a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_kapitsa, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:20:31 compute-0 podman[261841]: 2026-01-23 10:20:31.692070737 +0000 UTC m=+0.127717184 container start 7ae6bb326e4e420d7ef3019dbcd5fbfb50b4b0d77d7f80ed0312c184658c42a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_kapitsa, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 10:20:31 compute-0 podman[261841]: 2026-01-23 10:20:31.695216587 +0000 UTC m=+0.130863074 container attach 7ae6bb326e4e420d7ef3019dbcd5fbfb50b4b0d77d7f80ed0312c184658c42a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_kapitsa, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:20:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 10:20:31 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524953951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.762 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.766 249233 DEBUG nova.virt.libvirt.vif [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:20:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-646374583',display_name='tempest-TestNetworkBasicOps-server-646374583',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-646374583',id=5,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDt/A8rkqa34y9MBXMhsynMg1qj7m+Ndzad2AKhU4utcMwCBXtgQgdTx8HvqmCksU/dEcK+Ccws9jQ2U2f6VAlc3FD8bbqobwsuDEt0sD3tDzFEsYGAiF5NXnjtkI+37ug==',key_name='tempest-TestNetworkBasicOps-125336857',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-s8nsgdou',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:20:25Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=73e0ca50-1f94-47c9-afce-7591b733d68d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a471e230-aa12-49dc-959a-7630183c2e5a", "address": "fa:16:3e:ab:ce:c8", "network": {"id": "13f52d23-9898-43a0-a951-b69cb2abebab", "bridge": "br-int", "label": "tempest-network-smoke--2142663020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa471e230-aa", "ovs_interfaceid": "a471e230-aa12-49dc-959a-7630183c2e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.766 249233 DEBUG nova.network.os_vif_util [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "a471e230-aa12-49dc-959a-7630183c2e5a", "address": "fa:16:3e:ab:ce:c8", "network": {"id": "13f52d23-9898-43a0-a951-b69cb2abebab", "bridge": "br-int", "label": "tempest-network-smoke--2142663020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa471e230-aa", "ovs_interfaceid": "a471e230-aa12-49dc-959a-7630183c2e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.767 249233 DEBUG nova.network.os_vif_util [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:ce:c8,bridge_name='br-int',has_traffic_filtering=True,id=a471e230-aa12-49dc-959a-7630183c2e5a,network=Network(13f52d23-9898-43a0-a951-b69cb2abebab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa471e230-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.769 249233 DEBUG nova.objects.instance [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73e0ca50-1f94-47c9-afce-7591b733d68d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.782 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] End _get_guest_xml xml=<domain type="kvm">
Jan 23 10:20:31 compute-0 nova_compute[249229]:   <uuid>73e0ca50-1f94-47c9-afce-7591b733d68d</uuid>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   <name>instance-00000005</name>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   <memory>131072</memory>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   <vcpu>1</vcpu>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   <metadata>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <nova:name>tempest-TestNetworkBasicOps-server-646374583</nova:name>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <nova:creationTime>2026-01-23 10:20:30</nova:creationTime>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <nova:flavor name="m1.nano">
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <nova:memory>128</nova:memory>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <nova:disk>1</nova:disk>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <nova:swap>0</nova:swap>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <nova:ephemeral>0</nova:ephemeral>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <nova:vcpus>1</nova:vcpus>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       </nova:flavor>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <nova:owner>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <nova:user uuid="f459c4e71e6c47acb0f8aaf83f34695e">tempest-TestNetworkBasicOps-655467240-project-member</nova:user>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <nova:project uuid="acc90003f0f7412b8daf8a1b6f0f1494">tempest-TestNetworkBasicOps-655467240</nova:project>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       </nova:owner>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <nova:root type="image" uuid="271ec98e-d058-421b-bbfb-4b4a5954c90a"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <nova:ports>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <nova:port uuid="a471e230-aa12-49dc-959a-7630183c2e5a">
Jan 23 10:20:31 compute-0 nova_compute[249229]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         </nova:port>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       </nova:ports>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     </nova:instance>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   </metadata>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   <sysinfo type="smbios">
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <system>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <entry name="manufacturer">RDO</entry>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <entry name="product">OpenStack Compute</entry>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <entry name="serial">73e0ca50-1f94-47c9-afce-7591b733d68d</entry>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <entry name="uuid">73e0ca50-1f94-47c9-afce-7591b733d68d</entry>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <entry name="family">Virtual Machine</entry>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     </system>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   </sysinfo>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   <os>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <boot dev="hd"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <smbios mode="sysinfo"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   </os>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   <features>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <acpi/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <apic/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <vmcoreinfo/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   </features>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   <clock offset="utc">
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <timer name="pit" tickpolicy="delay"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <timer name="hpet" present="no"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   </clock>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   <cpu mode="host-model" match="exact">
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <topology sockets="1" cores="1" threads="1"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   </cpu>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   <devices>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <disk type="network" device="disk">
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <driver type="raw" cache="none"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <source protocol="rbd" name="vms/73e0ca50-1f94-47c9-afce-7591b733d68d_disk">
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <host name="192.168.122.100" port="6789"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <host name="192.168.122.102" port="6789"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <host name="192.168.122.101" port="6789"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       </source>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <auth username="openstack">
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <secret type="ceph" uuid="f3005f84-239a-55b6-a948-8f1fb592b920"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       </auth>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <target dev="vda" bus="virtio"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <disk type="network" device="cdrom">
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <driver type="raw" cache="none"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <source protocol="rbd" name="vms/73e0ca50-1f94-47c9-afce-7591b733d68d_disk.config">
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <host name="192.168.122.100" port="6789"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <host name="192.168.122.102" port="6789"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <host name="192.168.122.101" port="6789"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       </source>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <auth username="openstack">
Jan 23 10:20:31 compute-0 nova_compute[249229]:         <secret type="ceph" uuid="f3005f84-239a-55b6-a948-8f1fb592b920"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       </auth>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <target dev="sda" bus="sata"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <interface type="ethernet">
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <mac address="fa:16:3e:ab:ce:c8"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <model type="virtio"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <driver name="vhost" rx_queue_size="512"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <mtu size="1442"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <target dev="tapa471e230-aa"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     </interface>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <serial type="pty">
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <log file="/var/lib/nova/instances/73e0ca50-1f94-47c9-afce-7591b733d68d/console.log" append="off"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     </serial>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <video>
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <model type="virtio"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     </video>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <input type="tablet" bus="usb"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <rng model="virtio">
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <backend model="random">/dev/urandom</backend>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     </rng>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <controller type="usb" index="0"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     <memballoon model="virtio">
Jan 23 10:20:31 compute-0 nova_compute[249229]:       <stats period="10"/>
Jan 23 10:20:31 compute-0 nova_compute[249229]:     </memballoon>
Jan 23 10:20:31 compute-0 nova_compute[249229]:   </devices>
Jan 23 10:20:31 compute-0 nova_compute[249229]: </domain>
Jan 23 10:20:31 compute-0 nova_compute[249229]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.783 249233 DEBUG nova.compute.manager [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Preparing to wait for external event network-vif-plugged-a471e230-aa12-49dc-959a-7630183c2e5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.784 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.784 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.784 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.785 249233 DEBUG nova.virt.libvirt.vif [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:20:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-646374583',display_name='tempest-TestNetworkBasicOps-server-646374583',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-646374583',id=5,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDt/A8rkqa34y9MBXMhsynMg1qj7m+Ndzad2AKhU4utcMwCBXtgQgdTx8HvqmCksU/dEcK+Ccws9jQ2U2f6VAlc3FD8bbqobwsuDEt0sD3tDzFEsYGAiF5NXnjtkI+37ug==',key_name='tempest-TestNetworkBasicOps-125336857',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-s8nsgdou',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:20:25Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=73e0ca50-1f94-47c9-afce-7591b733d68d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a471e230-aa12-49dc-959a-7630183c2e5a", "address": "fa:16:3e:ab:ce:c8", "network": {"id": "13f52d23-9898-43a0-a951-b69cb2abebab", "bridge": "br-int", "label": "tempest-network-smoke--2142663020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa471e230-aa", "ovs_interfaceid": "a471e230-aa12-49dc-959a-7630183c2e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.785 249233 DEBUG nova.network.os_vif_util [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "a471e230-aa12-49dc-959a-7630183c2e5a", "address": "fa:16:3e:ab:ce:c8", "network": {"id": "13f52d23-9898-43a0-a951-b69cb2abebab", "bridge": "br-int", "label": "tempest-network-smoke--2142663020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa471e230-aa", "ovs_interfaceid": "a471e230-aa12-49dc-959a-7630183c2e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.786 249233 DEBUG nova.network.os_vif_util [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:ce:c8,bridge_name='br-int',has_traffic_filtering=True,id=a471e230-aa12-49dc-959a-7630183c2e5a,network=Network(13f52d23-9898-43a0-a951-b69cb2abebab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa471e230-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.787 249233 DEBUG os_vif [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:ce:c8,bridge_name='br-int',has_traffic_filtering=True,id=a471e230-aa12-49dc-959a-7630183c2e5a,network=Network(13f52d23-9898-43a0-a951-b69cb2abebab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa471e230-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.787 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.788 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.789 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.795 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.796 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa471e230-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.796 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa471e230-aa, col_values=(('external_ids', {'iface-id': 'a471e230-aa12-49dc-959a-7630183c2e5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:ce:c8', 'vm-uuid': '73e0ca50-1f94-47c9-afce-7591b733d68d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.798 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:31 compute-0 NetworkManager[48866]: <info>  [1769163631.8000] manager: (tapa471e230-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.800 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:20:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.805 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.806 249233 INFO os_vif [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:ce:c8,bridge_name='br-int',has_traffic_filtering=True,id=a471e230-aa12-49dc-959a-7630183c2e5a,network=Network(13f52d23-9898-43a0-a951-b69cb2abebab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa471e230-aa')
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.864 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.864 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.865 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No VIF found with MAC fa:16:3e:ab:ce:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.865 249233 INFO nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Using config drive
Jan 23 10:20:31 compute-0 nova_compute[249229]: 2026-01-23 10:20:31.900 249233 DEBUG nova.storage.rbd_utils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 73e0ca50-1f94-47c9-afce-7591b733d68d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:20:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:31.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]: {
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:     "1": [
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:         {
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             "devices": [
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "/dev/loop3"
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             ],
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             "lv_name": "ceph_lv0",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             "lv_size": "21470642176",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             "name": "ceph_lv0",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             "tags": {
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.cluster_name": "ceph",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.crush_device_class": "",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.encrypted": "0",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.osd_id": "1",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.type": "block",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.vdo": "0",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:                 "ceph.with_tpm": "0"
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             },
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             "type": "block",
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:             "vg_name": "ceph_vg0"
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:         }
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]:     ]
Jan 23 10:20:31 compute-0 inspiring_kapitsa[261857]: }
Jan 23 10:20:31 compute-0 systemd[1]: libpod-7ae6bb326e4e420d7ef3019dbcd5fbfb50b4b0d77d7f80ed0312c184658c42a8.scope: Deactivated successfully.
Jan 23 10:20:31 compute-0 podman[261841]: 2026-01-23 10:20:31.989865706 +0000 UTC m=+0.425512153 container died 7ae6bb326e4e420d7ef3019dbcd5fbfb50b4b0d77d7f80ed0312c184658c42a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 10:20:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a896de77a7bb6c86d7d80559c09e55b7610fcc4ef4c3488032fddaddbeb0a2b8-merged.mount: Deactivated successfully.
Jan 23 10:20:32 compute-0 podman[261841]: 2026-01-23 10:20:32.037237696 +0000 UTC m=+0.472884153 container remove 7ae6bb326e4e420d7ef3019dbcd5fbfb50b4b0d77d7f80ed0312c184658c42a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_kapitsa, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:20:32 compute-0 systemd[1]: libpod-conmon-7ae6bb326e4e420d7ef3019dbcd5fbfb50b4b0d77d7f80ed0312c184658c42a8.scope: Deactivated successfully.
Jan 23 10:20:32 compute-0 sudo[261675]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.093 249233 DEBUG nova.network.neutron [req-238b76bd-cbc0-469a-9ae0-679f8c46e689 req-cb89ec70-fbcd-4735-a85f-cc08fa6af2a5 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Updated VIF entry in instance network info cache for port a471e230-aa12-49dc-959a-7630183c2e5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.094 249233 DEBUG nova.network.neutron [req-238b76bd-cbc0-469a-9ae0-679f8c46e689 req-cb89ec70-fbcd-4735-a85f-cc08fa6af2a5 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Updating instance_info_cache with network_info: [{"id": "a471e230-aa12-49dc-959a-7630183c2e5a", "address": "fa:16:3e:ab:ce:c8", "network": {"id": "13f52d23-9898-43a0-a951-b69cb2abebab", "bridge": "br-int", "label": "tempest-network-smoke--2142663020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa471e230-aa", "ovs_interfaceid": "a471e230-aa12-49dc-959a-7630183c2e5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.109 249233 DEBUG oslo_concurrency.lockutils [req-238b76bd-cbc0-469a-9ae0-679f8c46e689 req-cb89ec70-fbcd-4735-a85f-cc08fa6af2a5 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Releasing lock "refresh_cache-73e0ca50-1f94-47c9-afce-7591b733d68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:20:32 compute-0 sudo[261901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:20:32 compute-0 sudo[261901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:32 compute-0 sudo[261901]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:32 compute-0 sudo[261926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:20:32 compute-0 sudo[261926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:32 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1524953951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.472 249233 INFO nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Creating config drive at /var/lib/nova/instances/73e0ca50-1f94-47c9-afce-7591b733d68d/disk.config
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.476 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73e0ca50-1f94-47c9-afce-7591b733d68d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rfw6_xd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:20:32 compute-0 podman[261993]: 2026-01-23 10:20:32.606105104 +0000 UTC m=+0.039852614 container create 806a2849bbe3f1a43badced9cfdc1d4d11ab3ad1a514d1a36b38b3c5356edbb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feynman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 10:20:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.609 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73e0ca50-1f94-47c9-afce-7591b733d68d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rfw6_xd" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:20:32 compute-0 systemd[1]: Started libpod-conmon-806a2849bbe3f1a43badced9cfdc1d4d11ab3ad1a514d1a36b38b3c5356edbb1.scope.
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.639 249233 DEBUG nova.storage.rbd_utils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 73e0ca50-1f94-47c9-afce-7591b733d68d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.645 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73e0ca50-1f94-47c9-afce-7591b733d68d/disk.config 73e0ca50-1f94-47c9-afce-7591b733d68d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:20:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:20:32 compute-0 podman[261993]: 2026-01-23 10:20:32.590121482 +0000 UTC m=+0.023869042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.715 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:32.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:32 compute-0 podman[261993]: 2026-01-23 10:20:32.724354763 +0000 UTC m=+0.158102273 container init 806a2849bbe3f1a43badced9cfdc1d4d11ab3ad1a514d1a36b38b3c5356edbb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feynman, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:20:32 compute-0 podman[261993]: 2026-01-23 10:20:32.731572451 +0000 UTC m=+0.165319961 container start 806a2849bbe3f1a43badced9cfdc1d4d11ab3ad1a514d1a36b38b3c5356edbb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feynman, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 10:20:32 compute-0 podman[261993]: 2026-01-23 10:20:32.734065794 +0000 UTC m=+0.167813304 container attach 806a2849bbe3f1a43badced9cfdc1d4d11ab3ad1a514d1a36b38b3c5356edbb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feynman, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:20:32 compute-0 hardcore_feynman[262027]: 167 167
Jan 23 10:20:32 compute-0 systemd[1]: libpod-806a2849bbe3f1a43badced9cfdc1d4d11ab3ad1a514d1a36b38b3c5356edbb1.scope: Deactivated successfully.
Jan 23 10:20:32 compute-0 podman[261993]: 2026-01-23 10:20:32.735767673 +0000 UTC m=+0.169515183 container died 806a2849bbe3f1a43badced9cfdc1d4d11ab3ad1a514d1a36b38b3c5356edbb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:20:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-923f22d0f3e6df56b703a06257ad134bba6b491d32419665e24044b58c6db7ed-merged.mount: Deactivated successfully.
Jan 23 10:20:32 compute-0 podman[261993]: 2026-01-23 10:20:32.769779846 +0000 UTC m=+0.203527356 container remove 806a2849bbe3f1a43badced9cfdc1d4d11ab3ad1a514d1a36b38b3c5356edbb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feynman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:20:32 compute-0 systemd[1]: libpod-conmon-806a2849bbe3f1a43badced9cfdc1d4d11ab3ad1a514d1a36b38b3c5356edbb1.scope: Deactivated successfully.
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.835 249233 DEBUG oslo_concurrency.processutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73e0ca50-1f94-47c9-afce-7591b733d68d/disk.config 73e0ca50-1f94-47c9-afce-7591b733d68d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.836 249233 INFO nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Deleting local config drive /var/lib/nova/instances/73e0ca50-1f94-47c9-afce-7591b733d68d/disk.config because it was imported into RBD.
Jan 23 10:20:32 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 23 10:20:32 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 23 10:20:32 compute-0 podman[262070]: 2026-01-23 10:20:32.918826836 +0000 UTC m=+0.040919054 container create abd4b2ed8dfad9331d5118a53f8d792e7a3c4054900d4473ff85aad484b750e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 10:20:32 compute-0 kernel: tapa471e230-aa: entered promiscuous mode
Jan 23 10:20:32 compute-0 NetworkManager[48866]: <info>  [1769163632.9512] manager: (tapa471e230-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 23 10:20:32 compute-0 ovn_controller[151634]: 2026-01-23T10:20:32Z|00039|binding|INFO|Claiming lport a471e230-aa12-49dc-959a-7630183c2e5a for this chassis.
Jan 23 10:20:32 compute-0 ovn_controller[151634]: 2026-01-23T10:20:32Z|00040|binding|INFO|a471e230-aa12-49dc-959a-7630183c2e5a: Claiming fa:16:3e:ab:ce:c8 10.100.0.6
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.957 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:32 compute-0 nova_compute[249229]: 2026-01-23 10:20:32.962 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:32 compute-0 systemd[1]: Started libpod-conmon-abd4b2ed8dfad9331d5118a53f8d792e7a3c4054900d4473ff85aad484b750e7.scope.
Jan 23 10:20:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:32.985 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:ce:c8 10.100.0.6'], port_security=['fa:16:3e:ab:ce:c8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '73e0ca50-1f94-47c9-afce-7591b733d68d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13f52d23-9898-43a0-a951-b69cb2abebab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7fe44e39-9cd5-4125-8e95-de3941586911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30316727-f942-4d99-94ec-26d1184b5c8a, chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], logical_port=a471e230-aa12-49dc-959a-7630183c2e5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:20:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:32.986 161921 INFO neutron.agent.ovn.metadata.agent [-] Port a471e230-aa12-49dc-959a-7630183c2e5a in datapath 13f52d23-9898-43a0-a951-b69cb2abebab bound to our chassis
Jan 23 10:20:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:32.987 161921 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13f52d23-9898-43a0-a951-b69cb2abebab
Jan 23 10:20:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:20:32 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Jan 23 10:20:32 compute-0 systemd-machined[216411]: New machine qemu-2-instance-00000005.
Jan 23 10:20:32 compute-0 podman[262070]: 2026-01-23 10:20:32.900428814 +0000 UTC m=+0.022521102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d87a4ae9ce9fd2dcfc19ee7521f4502b111b770c58bb8112490deb5e590b1a3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d87a4ae9ce9fd2dcfc19ee7521f4502b111b770c58bb8112490deb5e590b1a3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d87a4ae9ce9fd2dcfc19ee7521f4502b111b770c58bb8112490deb5e590b1a3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d87a4ae9ce9fd2dcfc19ee7521f4502b111b770c58bb8112490deb5e590b1a3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:33 compute-0 systemd-udevd[262121]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.007 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[c282661d-3fd8-43aa-be8b-9504411a789e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.009 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap13f52d23-91 in ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 23 10:20:33 compute-0 podman[262070]: 2026-01-23 10:20:33.01650151 +0000 UTC m=+0.138593768 container init abd4b2ed8dfad9331d5118a53f8d792e7a3c4054900d4473ff85aad484b750e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.014 255218 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap13f52d23-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.014 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2f39cb-d7e2-4d36-83a2-968cb097d211]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.017 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[5918fc8e-9eb6-42d6-961e-74cee1f4f9c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 NetworkManager[48866]: <info>  [1769163633.0205] device (tapa471e230-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 10:20:33 compute-0 NetworkManager[48866]: <info>  [1769163633.0215] device (tapa471e230-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 10:20:33 compute-0 podman[262070]: 2026-01-23 10:20:33.029212037 +0000 UTC m=+0.151304265 container start abd4b2ed8dfad9331d5118a53f8d792e7a3c4054900d4473ff85aad484b750e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.030 162436 DEBUG oslo.privsep.daemon [-] privsep: reply[64f7a698-f51b-4888-a14c-129a119ea17b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 podman[262070]: 2026-01-23 10:20:33.033127071 +0000 UTC m=+0.155219299 container attach abd4b2ed8dfad9331d5118a53f8d792e7a3c4054900d4473ff85aad484b750e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.038 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:33 compute-0 ovn_controller[151634]: 2026-01-23T10:20:33Z|00041|binding|INFO|Setting lport a471e230-aa12-49dc-959a-7630183c2e5a ovn-installed in OVS
Jan 23 10:20:33 compute-0 ovn_controller[151634]: 2026-01-23T10:20:33Z|00042|binding|INFO|Setting lport a471e230-aa12-49dc-959a-7630183c2e5a up in Southbound
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.044 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.048 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb072eb-f882-49fc-8d30-7d929663349b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.079 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[1a376fc6-2027-47db-a419-4ab818867642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 systemd-udevd[262125]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.087 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff8ae50-1867-4921-aee0-2744b37e0c62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 NetworkManager[48866]: <info>  [1769163633.0890] manager: (tap13f52d23-90): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.129 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[e29a586c-3527-423e-b36a-321409407b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.132 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[04313a3c-07f0-4a5a-9d1c-527426b98558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 NetworkManager[48866]: <info>  [1769163633.1548] device (tap13f52d23-90): carrier: link connected
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.163 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[b6430d80-2514-400f-a0b7-3860152182cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.189 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[5df28b53-32ca-4c59-a2e7-5c21b1e76ec4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13f52d23-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:4e:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475468, 'reachable_time': 19827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262155, 'error': None, 'target': 'ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.212 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf20d10-00cc-45fa-a5d6-33076019f00c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febe:4e18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475468, 'tstamp': 475468}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262157, 'error': None, 'target': 'ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.231 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0fd991-5fcf-416b-a8ed-2f9d0e9b0352]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13f52d23-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:4e:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475468, 'reachable_time': 19827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262163, 'error': None, 'target': 'ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.268 249233 DEBUG nova.compute.manager [req-25037cd3-50ce-4cd0-a703-dd157c92db91 req-6a9f1913-ee0b-4bde-b2a9-bddb05d1d97c 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Received event network-vif-plugged-a471e230-aa12-49dc-959a-7630183c2e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.268 249233 DEBUG oslo_concurrency.lockutils [req-25037cd3-50ce-4cd0-a703-dd157c92db91 req-6a9f1913-ee0b-4bde-b2a9-bddb05d1d97c 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.269 249233 DEBUG oslo_concurrency.lockutils [req-25037cd3-50ce-4cd0-a703-dd157c92db91 req-6a9f1913-ee0b-4bde-b2a9-bddb05d1d97c 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.269 249233 DEBUG oslo_concurrency.lockutils [req-25037cd3-50ce-4cd0-a703-dd157c92db91 req-6a9f1913-ee0b-4bde-b2a9-bddb05d1d97c 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.270 249233 DEBUG nova.compute.manager [req-25037cd3-50ce-4cd0-a703-dd157c92db91 req-6a9f1913-ee0b-4bde-b2a9-bddb05d1d97c 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Processing event network-vif-plugged-a471e230-aa12-49dc-959a-7630183c2e5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.281 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[a016ba1a-5ea5-401e-a976-dc69019d9c1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:33 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.362 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffafea2-666e-49f7-acd6-eb1edefd59a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.365 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13f52d23-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.365 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.365 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13f52d23-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:20:33 compute-0 NetworkManager[48866]: <info>  [1769163633.3681] manager: (tap13f52d23-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.367 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:33 compute-0 kernel: tap13f52d23-90: entered promiscuous mode
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.371 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13f52d23-90, col_values=(('external_ids', {'iface-id': 'bc964dd7-f70f-4a1c-80bd-e1f6bfbe809a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.372 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:33 compute-0 ovn_controller[151634]: 2026-01-23T10:20:33Z|00043|binding|INFO|Releasing lport bc964dd7-f70f-4a1c-80bd-e1f6bfbe809a from this chassis (sb_readonly=0)
Jan 23 10:20:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:33 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.387 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.389 161921 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/13f52d23-9898-43a0-a951-b69cb2abebab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/13f52d23-9898-43a0-a951-b69cb2abebab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 23 10:20:33 compute-0 ceph-mon[74335]: pgmap v818: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.392 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a81010-ab34-4369-943c-fb5e3e7841fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.393 161921 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: global
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     log         /dev/log local0 debug
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     log-tag     haproxy-metadata-proxy-13f52d23-9898-43a0-a951-b69cb2abebab
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     user        root
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     group       root
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     maxconn     1024
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     pidfile     /var/lib/neutron/external/pids/13f52d23-9898-43a0-a951-b69cb2abebab.pid.haproxy
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     daemon
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: defaults
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     log global
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     mode http
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     option httplog
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     option dontlognull
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     option http-server-close
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     option forwardfor
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     retries                 3
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     timeout http-request    30s
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     timeout connect         30s
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     timeout client          32s
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     timeout server          32s
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     timeout http-keep-alive 30s
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: listen listener
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     bind 169.254.169.254:80
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     server metadata /var/lib/neutron/metadata_proxy
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:     http-request add-header X-OVN-Network-ID 13f52d23-9898-43a0-a951-b69cb2abebab
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 23 10:20:33 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:33.394 161921 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab', 'env', 'PROCESS_TAG=haproxy-13f52d23-9898-43a0-a951-b69cb2abebab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/13f52d23-9898-43a0-a951-b69cb2abebab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 23 10:20:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:33 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:33.646Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:20:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:33.649Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:20:33 compute-0 lvm[262277]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:20:33 compute-0 lvm[262277]: VG ceph_vg0 finished
Jan 23 10:20:33 compute-0 heuristic_herschel[262114]: {}
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.809 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769163633.8076317, 73e0ca50-1f94-47c9-afce-7591b733d68d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.810 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] VM Started (Lifecycle Event)
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.814 249233 DEBUG nova.compute.manager [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.817 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 10:20:33 compute-0 systemd[1]: libpod-abd4b2ed8dfad9331d5118a53f8d792e7a3c4054900d4473ff85aad484b750e7.scope: Deactivated successfully.
Jan 23 10:20:33 compute-0 systemd[1]: libpod-abd4b2ed8dfad9331d5118a53f8d792e7a3c4054900d4473ff85aad484b750e7.scope: Consumed 1.287s CPU time.
Jan 23 10:20:33 compute-0 podman[262070]: 2026-01-23 10:20:33.822270438 +0000 UTC m=+0.944362676 container died abd4b2ed8dfad9331d5118a53f8d792e7a3c4054900d4473ff85aad484b750e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.825 249233 INFO nova.virt.libvirt.driver [-] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Instance spawned successfully.
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.826 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 10:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d87a4ae9ce9fd2dcfc19ee7521f4502b111b770c58bb8112490deb5e590b1a3e-merged.mount: Deactivated successfully.
Jan 23 10:20:33 compute-0 podman[262070]: 2026-01-23 10:20:33.882905941 +0000 UTC m=+1.004998169 container remove abd4b2ed8dfad9331d5118a53f8d792e7a3c4054900d4473ff85aad484b750e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:20:33 compute-0 podman[262299]: 2026-01-23 10:20:33.893761655 +0000 UTC m=+0.087940834 container create 6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.895 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:20:33 compute-0 systemd[1]: libpod-conmon-abd4b2ed8dfad9331d5118a53f8d792e7a3c4054900d4473ff85aad484b750e7.scope: Deactivated successfully.
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.907 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.908 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.908 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.908 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.908 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.909 249233 DEBUG nova.virt.libvirt.driver [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.919 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 10:20:33 compute-0 podman[262299]: 2026-01-23 10:20:33.845885181 +0000 UTC m=+0.040064390 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 23 10:20:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.957 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.957 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769163633.808543, 73e0ca50-1f94-47c9-afce-7591b733d68d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.957 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] VM Paused (Lifecycle Event)
Jan 23 10:20:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:33.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:33 compute-0 systemd[1]: Started libpod-conmon-6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a.scope.
Jan 23 10:20:33 compute-0 sudo[261926]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.972 249233 INFO nova.compute.manager [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Took 8.33 seconds to spawn the instance on the hypervisor.
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.972 249233 DEBUG nova.compute.manager [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.980 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:20:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.984 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769163633.8172975, 73e0ca50-1f94-47c9-afce-7591b733d68d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:20:33 compute-0 nova_compute[249229]: 2026-01-23 10:20:33.984 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] VM Resumed (Lifecycle Event)
Jan 23 10:20:33 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:20:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9014d23db93148d25260a60edba433baf561a23f3c8707f2214177c0dea4cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 10:20:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:34 compute-0 nova_compute[249229]: 2026-01-23 10:20:34.006 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:20:34 compute-0 nova_compute[249229]: 2026-01-23 10:20:34.015 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 10:20:34 compute-0 podman[262299]: 2026-01-23 10:20:34.015589208 +0000 UTC m=+0.209768387 container init 6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 23 10:20:34 compute-0 podman[262299]: 2026-01-23 10:20:34.025307869 +0000 UTC m=+0.219487048 container start 6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 10:20:34 compute-0 nova_compute[249229]: 2026-01-23 10:20:34.041 249233 INFO nova.compute.manager [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Took 9.19 seconds to build instance.
Jan 23 10:20:34 compute-0 neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab[262327]: [NOTICE]   (262338) : New worker (262356) forked
Jan 23 10:20:34 compute-0 neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab[262327]: [NOTICE]   (262338) : Loading success.
Jan 23 10:20:34 compute-0 nova_compute[249229]: 2026-01-23 10:20:34.061 249233 DEBUG oslo_concurrency.lockutils [None req-ca9a0272-e77f-4ffa-ba05-e12f7a5af20c f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:34 compute-0 sudo[262330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:20:34 compute-0 sudo[262330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:34 compute-0 sudo[262330]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:20:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:34.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:20:35 compute-0 ceph-mon[74335]: pgmap v819: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:20:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:20:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:20:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:35 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:35 compute-0 nova_compute[249229]: 2026-01-23 10:20:35.353 249233 DEBUG nova.compute.manager [req-e3765bfc-adf2-4737-9e2e-e311813e426f req-07addde6-186b-4947-8a79-407823d10b29 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Received event network-vif-plugged-a471e230-aa12-49dc-959a-7630183c2e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:20:35 compute-0 nova_compute[249229]: 2026-01-23 10:20:35.353 249233 DEBUG oslo_concurrency.lockutils [req-e3765bfc-adf2-4737-9e2e-e311813e426f req-07addde6-186b-4947-8a79-407823d10b29 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:35 compute-0 nova_compute[249229]: 2026-01-23 10:20:35.353 249233 DEBUG oslo_concurrency.lockutils [req-e3765bfc-adf2-4737-9e2e-e311813e426f req-07addde6-186b-4947-8a79-407823d10b29 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:35 compute-0 nova_compute[249229]: 2026-01-23 10:20:35.353 249233 DEBUG oslo_concurrency.lockutils [req-e3765bfc-adf2-4737-9e2e-e311813e426f req-07addde6-186b-4947-8a79-407823d10b29 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:35 compute-0 nova_compute[249229]: 2026-01-23 10:20:35.354 249233 DEBUG nova.compute.manager [req-e3765bfc-adf2-4737-9e2e-e311813e426f req-07addde6-186b-4947-8a79-407823d10b29 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] No waiting events found dispatching network-vif-plugged-a471e230-aa12-49dc-959a-7630183c2e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:20:35 compute-0 nova_compute[249229]: 2026-01-23 10:20:35.354 249233 WARNING nova.compute.manager [req-e3765bfc-adf2-4737-9e2e-e311813e426f req-07addde6-186b-4947-8a79-407823d10b29 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Received unexpected event network-vif-plugged-a471e230-aa12-49dc-959a-7630183c2e5a for instance with vm_state active and task_state None.
Jan 23 10:20:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:35 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:35 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:35.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:20:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:20:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:36 compute-0 nova_compute[249229]: 2026-01-23 10:20:36.800 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:37 compute-0 ceph-mon[74335]: pgmap v820: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:20:37 compute-0 nova_compute[249229]: 2026-01-23 10:20:37.299 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:37 compute-0 ovn_controller[151634]: 2026-01-23T10:20:37Z|00044|binding|INFO|Releasing lport bc964dd7-f70f-4a1c-80bd-e1f6bfbe809a from this chassis (sb_readonly=0)
Jan 23 10:20:37 compute-0 NetworkManager[48866]: <info>  [1769163637.3026] manager: (patch-br-int-to-provnet-995e8c2d-ca55-405c-bf26-97e408875e42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Jan 23 10:20:37 compute-0 NetworkManager[48866]: <info>  [1769163637.3037] manager: (patch-provnet-995e8c2d-ca55-405c-bf26-97e408875e42-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 23 10:20:37 compute-0 ovn_controller[151634]: 2026-01-23T10:20:37Z|00045|binding|INFO|Releasing lport bc964dd7-f70f-4a1c-80bd-e1f6bfbe809a from this chassis (sb_readonly=0)
Jan 23 10:20:37 compute-0 nova_compute[249229]: 2026-01-23 10:20:37.304 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:37 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:37 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:37 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:37 compute-0 nova_compute[249229]: 2026-01-23 10:20:37.622 249233 DEBUG nova.compute.manager [req-96350966-96ad-45dc-a02f-341d8cbdf2f5 req-6f9f107a-2f33-472a-b8fa-7122377ba27b 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Received event network-changed-a471e230-aa12-49dc-959a-7630183c2e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:20:37 compute-0 nova_compute[249229]: 2026-01-23 10:20:37.622 249233 DEBUG nova.compute.manager [req-96350966-96ad-45dc-a02f-341d8cbdf2f5 req-6f9f107a-2f33-472a-b8fa-7122377ba27b 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Refreshing instance network info cache due to event network-changed-a471e230-aa12-49dc-959a-7630183c2e5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 10:20:37 compute-0 nova_compute[249229]: 2026-01-23 10:20:37.623 249233 DEBUG oslo_concurrency.lockutils [req-96350966-96ad-45dc-a02f-341d8cbdf2f5 req-6f9f107a-2f33-472a-b8fa-7122377ba27b 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "refresh_cache-73e0ca50-1f94-47c9-afce-7591b733d68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:20:37 compute-0 nova_compute[249229]: 2026-01-23 10:20:37.623 249233 DEBUG oslo_concurrency.lockutils [req-96350966-96ad-45dc-a02f-341d8cbdf2f5 req-6f9f107a-2f33-472a-b8fa-7122377ba27b 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquired lock "refresh_cache-73e0ca50-1f94-47c9-afce-7591b733d68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:20:37 compute-0 nova_compute[249229]: 2026-01-23 10:20:37.623 249233 DEBUG nova.network.neutron [req-96350966-96ad-45dc-a02f-341d8cbdf2f5 req-6f9f107a-2f33-472a-b8fa-7122377ba27b 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Refreshing network info cache for port a471e230-aa12-49dc-959a-7630183c2e5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 10:20:37 compute-0 nova_compute[249229]: 2026-01-23 10:20:37.718 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:37.762Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:20:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:37.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:20:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:39 compute-0 nova_compute[249229]: 2026-01-23 10:20:39.014 249233 DEBUG nova.network.neutron [req-96350966-96ad-45dc-a02f-341d8cbdf2f5 req-6f9f107a-2f33-472a-b8fa-7122377ba27b 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Updated VIF entry in instance network info cache for port a471e230-aa12-49dc-959a-7630183c2e5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 10:20:39 compute-0 nova_compute[249229]: 2026-01-23 10:20:39.015 249233 DEBUG nova.network.neutron [req-96350966-96ad-45dc-a02f-341d8cbdf2f5 req-6f9f107a-2f33-472a-b8fa-7122377ba27b 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Updating instance_info_cache with network_info: [{"id": "a471e230-aa12-49dc-959a-7630183c2e5a", "address": "fa:16:3e:ab:ce:c8", "network": {"id": "13f52d23-9898-43a0-a951-b69cb2abebab", "bridge": "br-int", "label": "tempest-network-smoke--2142663020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa471e230-aa", "ovs_interfaceid": "a471e230-aa12-49dc-959a-7630183c2e5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:20:39 compute-0 ceph-mon[74335]: pgmap v821: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:20:39 compute-0 nova_compute[249229]: 2026-01-23 10:20:39.038 249233 DEBUG oslo_concurrency.lockutils [req-96350966-96ad-45dc-a02f-341d8cbdf2f5 req-6f9f107a-2f33-472a-b8fa-7122377ba27b 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Releasing lock "refresh_cache-73e0ca50-1f94-47c9-afce-7591b733d68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:20:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:39 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:39 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:39 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:20:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:39.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:20:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:39] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Jan 23 10:20:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:39] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Jan 23 10:20:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:20:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:40.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:41 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:41 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:41 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:41 compute-0 nova_compute[249229]: 2026-01-23 10:20:41.803 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:41 compute-0 ceph-mon[74335]: pgmap v822: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:20:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:41.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:20:42 compute-0 nova_compute[249229]: 2026-01-23 10:20:42.721 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:42.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:43 compute-0 ceph-mon[74335]: pgmap v823: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:20:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:43 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c80036b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:43 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:43 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:43.650Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:20:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:43.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:44 compute-0 sudo[262379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:20:44 compute-0 sudo[262379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:20:44 compute-0 sudo[262379]: pam_unix(sudo:session): session closed for user root
Jan 23 10:20:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 23 10:20:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:44.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:45 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:45 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:45 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:45 compute-0 podman[262405]: 2026-01-23 10:20:45.648219127 +0000 UTC m=+0.140801842 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 23 10:20:45 compute-0 ceph-mon[74335]: pgmap v824: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 23 10:20:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:45.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 192 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 23 10:20:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:46.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:46 compute-0 ovn_controller[151634]: 2026-01-23T10:20:46Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:ce:c8 10.100.0.6
Jan 23 10:20:46 compute-0 ovn_controller[151634]: 2026-01-23T10:20:46Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:ce:c8 10.100.0.6
Jan 23 10:20:46 compute-0 nova_compute[249229]: 2026-01-23 10:20:46.806 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:47 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:47 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:47 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:47 compute-0 nova_compute[249229]: 2026-01-23 10:20:47.723 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:47.763Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:20:47 compute-0 ceph-mon[74335]: pgmap v825: 353 pgs: 353 active+clean; 192 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 23 10:20:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:47.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:20:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2314219766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:20:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:20:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2314219766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:20:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 192 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 261 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Jan 23 10:20:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:48.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2314219766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:20:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2314219766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:20:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:49 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:49 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0045d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:49 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:49] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Jan 23 10:20:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:49] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Jan 23 10:20:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:49.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:20:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:20:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:20:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:20:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:20:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:20:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:20:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:20:50 compute-0 ceph-mon[74335]: pgmap v826: 353 pgs: 353 active+clean; 192 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 261 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Jan 23 10:20:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:20:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 192 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 261 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Jan 23 10:20:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:50.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:51 compute-0 ceph-mon[74335]: pgmap v827: 353 pgs: 353 active+clean; 192 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 261 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Jan 23 10:20:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:51 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7a80045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:51 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb79c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:51 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7bc0045f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:20:51 compute-0 nova_compute[249229]: 2026-01-23 10:20:51.809 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:51.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 23 10:20:52 compute-0 nova_compute[249229]: 2026-01-23 10:20:52.725 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:52.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:53 compute-0 kernel: ganesha.nfsd[259682]: segfault at 50 ip 00007fb84d64132e sp 00007fb7b1ffa210 error 4 in libntirpc.so.5.8[7fb84d626000+2c000] likely on CPU 0 (core 0, socket 0)
Jan 23 10:20:53 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
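Annotation: the kernel segfault line above gives both the faulting instruction pointer (ip 00007fb84d64132e) and the libntirpc.so.5.8 mapping (base 7fb84d626000, length 2c000), so the offset of the fault inside that mapped region is just a subtraction. A small worked example using only the values printed above:

    # Offset of the faulting instruction inside the libntirpc.so.5.8 mapping,
    # taken from the kernel segfault line above.
    ip   = 0x00007fb84d64132e   # faulting instruction pointer
    base = 0x7fb84d626000       # start of the libntirpc.so.5.8 mapping
    size = 0x2c000              # length of the mapping

    offset = ip - base
    assert 0 <= offset < size
    print(hex(offset))          # 0x1b32e -- fault lies inside the mapped region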
Jan 23 10:20:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[258615]: 23/01/2026 10:20:53 : epoch 69734b0d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7c8003fd0 fd 38 proxy ignored for local
Jan 23 10:20:53 compute-0 systemd[1]: Started Process Core Dump (PID 262440/UID 0).
Jan 23 10:20:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:53.651Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:20:53 compute-0 ceph-mon[74335]: pgmap v828: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 23 10:20:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:53.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:54 compute-0 nova_compute[249229]: 2026-01-23 10:20:54.041 249233 INFO nova.compute.manager [None req-36eee382-5912-43d3-a4e1-ac7615f35790 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Get console output
Jan 23 10:20:54 compute-0 nova_compute[249229]: 2026-01-23 10:20:54.051 255486 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 23 10:20:54 compute-0 nova_compute[249229]: 2026-01-23 10:20:54.372 249233 DEBUG oslo_concurrency.lockutils [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "73e0ca50-1f94-47c9-afce-7591b733d68d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:54 compute-0 nova_compute[249229]: 2026-01-23 10:20:54.373 249233 DEBUG oslo_concurrency.lockutils [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:54 compute-0 nova_compute[249229]: 2026-01-23 10:20:54.373 249233 DEBUG oslo_concurrency.lockutils [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:54 compute-0 nova_compute[249229]: 2026-01-23 10:20:54.373 249233 DEBUG oslo_concurrency.lockutils [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:54 compute-0 nova_compute[249229]: 2026-01-23 10:20:54.373 249233 DEBUG oslo_concurrency.lockutils [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:54 compute-0 nova_compute[249229]: 2026-01-23 10:20:54.375 249233 INFO nova.compute.manager [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Terminating instance
Jan 23 10:20:54 compute-0 nova_compute[249229]: 2026-01-23 10:20:54.379 249233 DEBUG nova.compute.manager [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 10:20:54 compute-0 podman[262443]: 2026-01-23 10:20:54.544628782 +0000 UTC m=+0.074154095 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 10:20:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 10:20:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:54.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:55 compute-0 systemd-coredump[262441]: Process 258621 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 57:
                                                    #0  0x00007fb84d64132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
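Annotation: systemd-coredump captured a core for ganesha.nfsd (process 258621), and the stack trace above resolves only to a raw offset in libntirpc.so.5.8 because no debug symbols are available. A hedged sketch of pulling the dump up with coredumpctl; it assumes the tool is present on the host and the core has not been vacuumed, and the debuginfo requirement is an assumption as well.

    # Hedged sketch: locate and inspect the ganesha.nfsd core captured by systemd-coredump.
    import subprocess

    # List recent dumps for the crashed binary.
    subprocess.run(["coredumpctl", "list", "ganesha.nfsd"], check=False)

    # Show metadata (signal, timestamp, stack trace) for the PID reported above.
    subprocess.run(["coredumpctl", "info", "258621"], check=False)

    # Optionally open the dump in gdb (requires gdb and, for readable frames,
    # the matching libntirpc/nfs-ganesha debuginfo packages -- an assumption here).
    # subprocess.run(["coredumpctl", "debug", "258621"])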
Jan 23 10:20:55 compute-0 ceph-mon[74335]: pgmap v829: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 10:20:55 compute-0 kernel: tapa471e230-aa (unregistering): left promiscuous mode
Jan 23 10:20:55 compute-0 NetworkManager[48866]: <info>  [1769163655.7089] device (tapa471e230-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 10:20:55 compute-0 ovn_controller[151634]: 2026-01-23T10:20:55Z|00046|binding|INFO|Releasing lport a471e230-aa12-49dc-959a-7630183c2e5a from this chassis (sb_readonly=0)
Jan 23 10:20:55 compute-0 ovn_controller[151634]: 2026-01-23T10:20:55Z|00047|binding|INFO|Setting lport a471e230-aa12-49dc-959a-7630183c2e5a down in Southbound
Jan 23 10:20:55 compute-0 ovn_controller[151634]: 2026-01-23T10:20:55Z|00048|binding|INFO|Removing iface tapa471e230-aa ovn-installed in OVS
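Annotation: ovn-controller is releasing logical port a471e230-aa12-49dc-959a-7630183c2e5a from this chassis and marking it down in the Southbound DB as part of the instance teardown. A hedged sketch of confirming that binding state from the Southbound database; it assumes ovn-sbctl on this node can reach the SB DB, and the expected column values are inferred from the log messages above.

    # Hedged sketch: confirm the Port_Binding state for the logical port being released.
    import subprocess

    LPORT = "a471e230-aa12-49dc-959a-7630183c2e5a"   # logical port from the ovn_controller log

    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={LPORT}"],
        check=False,
    )
    # After the release above, the 'chassis' column should be empty and 'up' false.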
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.715 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.717 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:55 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:55.723 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:ce:c8 10.100.0.6'], port_security=['fa:16:3e:ab:ce:c8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '73e0ca50-1f94-47c9-afce-7591b733d68d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13f52d23-9898-43a0-a951-b69cb2abebab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fe44e39-9cd5-4125-8e95-de3941586911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30316727-f942-4d99-94ec-26d1184b5c8a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], logical_port=a471e230-aa12-49dc-959a-7630183c2e5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:20:55 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:55.725 161921 INFO neutron.agent.ovn.metadata.agent [-] Port a471e230-aa12-49dc-959a-7630183c2e5a in datapath 13f52d23-9898-43a0-a951-b69cb2abebab unbound from our chassis
Jan 23 10:20:55 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:55.726 161921 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13f52d23-9898-43a0-a951-b69cb2abebab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 23 10:20:55 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:55.728 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[540028ab-62af-46e0-a8ff-141ab162ecfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:55 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:55.729 161921 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab namespace which is not needed anymore
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.737 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:55 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 23 10:20:55 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 13.912s CPU time.
Jan 23 10:20:55 compute-0 systemd-machined[216411]: Machine qemu-2-instance-00000005 terminated.
Jan 23 10:20:55 compute-0 systemd[1]: systemd-coredump@10-262440-0.service: Deactivated successfully.
Jan 23 10:20:55 compute-0 systemd[1]: systemd-coredump@10-262440-0.service: Consumed 1.343s CPU time.
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.827 249233 INFO nova.virt.libvirt.driver [-] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Instance destroyed successfully.
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.828 249233 DEBUG nova.objects.instance [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'resources' on Instance uuid 73e0ca50-1f94-47c9-afce-7591b733d68d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:20:55 compute-0 podman[262493]: 2026-01-23 10:20:55.833343184 +0000 UTC m=+0.028876466 container died 12c3c919b5ab32a440341b9db41c42bf5cfc0858c3ee572f8f7ed1fe7702536b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.841 249233 DEBUG nova.virt.libvirt.vif [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:20:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-646374583',display_name='tempest-TestNetworkBasicOps-server-646374583',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-646374583',id=5,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDt/A8rkqa34y9MBXMhsynMg1qj7m+Ndzad2AKhU4utcMwCBXtgQgdTx8HvqmCksU/dEcK+Ccws9jQ2U2f6VAlc3FD8bbqobwsuDEt0sD3tDzFEsYGAiF5NXnjtkI+37ug==',key_name='tempest-TestNetworkBasicOps-125336857',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:20:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-s8nsgdou',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:20:34Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=73e0ca50-1f94-47c9-afce-7591b733d68d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a471e230-aa12-49dc-959a-7630183c2e5a", "address": "fa:16:3e:ab:ce:c8", "network": {"id": "13f52d23-9898-43a0-a951-b69cb2abebab", "bridge": "br-int", "label": "tempest-network-smoke--2142663020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa471e230-aa", "ovs_interfaceid": 
"a471e230-aa12-49dc-959a-7630183c2e5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.841 249233 DEBUG nova.network.os_vif_util [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "a471e230-aa12-49dc-959a-7630183c2e5a", "address": "fa:16:3e:ab:ce:c8", "network": {"id": "13f52d23-9898-43a0-a951-b69cb2abebab", "bridge": "br-int", "label": "tempest-network-smoke--2142663020", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa471e230-aa", "ovs_interfaceid": "a471e230-aa12-49dc-959a-7630183c2e5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.842 249233 DEBUG nova.network.os_vif_util [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:ce:c8,bridge_name='br-int',has_traffic_filtering=True,id=a471e230-aa12-49dc-959a-7630183c2e5a,network=Network(13f52d23-9898-43a0-a951-b69cb2abebab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa471e230-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.843 249233 DEBUG os_vif [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:ce:c8,bridge_name='br-int',has_traffic_filtering=True,id=a471e230-aa12-49dc-959a-7630183c2e5a,network=Network(13f52d23-9898-43a0-a951-b69cb2abebab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa471e230-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.845 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.846 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa471e230-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.847 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.849 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:55 compute-0 nova_compute[249229]: 2026-01-23 10:20:55.853 249233 INFO os_vif [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:ce:c8,bridge_name='br-int',has_traffic_filtering=True,id=a471e230-aa12-49dc-959a-7630183c2e5a,network=Network(13f52d23-9898-43a0-a951-b69cb2abebab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa471e230-aa')
Jan 23 10:20:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-42dc14ed0f14d986ca5a2fa627839c4d7e75013f733629a0eed8b0d13ae7dec9-merged.mount: Deactivated successfully.
Jan 23 10:20:55 compute-0 podman[262493]: 2026-01-23 10:20:55.871338503 +0000 UTC m=+0.066871765 container remove 12c3c919b5ab32a440341b9db41c42bf5cfc0858c3ee572f8f7ed1fe7702536b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:20:55 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
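Annotation: systemd reports the NFS container's main process exiting with status=139, which is the shell-style encoding of death by signal: 128 plus the signal number, here SIGSEGV (11), consistent with the ganesha.nfsd segfault and core dump above. A tiny decoding example:

    # Decode the systemd "status=139" exit code: 128 + signal number.
    import signal

    status = 139
    signum = status - 128
    print(signum, signal.Signals(signum).name)   # 11 SIGSEGV, matching the segfault above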
Jan 23 10:20:55 compute-0 neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab[262327]: [NOTICE]   (262338) : haproxy version is 2.8.14-c23fe91
Jan 23 10:20:55 compute-0 neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab[262327]: [NOTICE]   (262338) : path to executable is /usr/sbin/haproxy
Jan 23 10:20:55 compute-0 neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab[262327]: [WARNING]  (262338) : Exiting Master process...
Jan 23 10:20:55 compute-0 neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab[262327]: [WARNING]  (262338) : Exiting Master process...
Jan 23 10:20:55 compute-0 neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab[262327]: [ALERT]    (262338) : Current worker (262356) exited with code 143 (Terminated)
Jan 23 10:20:55 compute-0 neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab[262327]: [WARNING]  (262338) : All workers exited. Exiting... (0)
Jan 23 10:20:55 compute-0 systemd[1]: libpod-6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a.scope: Deactivated successfully.
Jan 23 10:20:55 compute-0 podman[262507]: 2026-01-23 10:20:55.90201005 +0000 UTC m=+0.079667425 container died 6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 10:20:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a-userdata-shm.mount: Deactivated successfully.
Jan 23 10:20:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-de9014d23db93148d25260a60edba433baf561a23f3c8707f2214177c0dea4cb-merged.mount: Deactivated successfully.
Jan 23 10:20:55 compute-0 podman[262507]: 2026-01-23 10:20:55.934588362 +0000 UTC m=+0.112245727 container cleanup 6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 10:20:55 compute-0 systemd[1]: libpod-conmon-6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a.scope: Deactivated successfully.
Jan 23 10:20:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:55.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:55 compute-0 podman[262575]: 2026-01-23 10:20:55.995876104 +0000 UTC m=+0.041603414 container remove 6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 10:20:56 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:56.001 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[92a9ee11-827e-4627-b860-65690a6d7e13]: (4, ('Fri Jan 23 10:20:55 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab (6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a)\n6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a\nFri Jan 23 10:20:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab (6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a)\n6c447e8c7eff6dbb6d169f4ea9e6bb75f7e6d593ce6fcc1e7b963c5b77dc7d0a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:56 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:56.004 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[5119b4ac-d35d-4b83-b8c3-2753506c4d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:56 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:56.006 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13f52d23-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:20:56 compute-0 kernel: tap13f52d23-90: left promiscuous mode
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.009 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.023 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:56 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:56.028 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[515a98d6-5c56-489a-aafd-ac5cac3a7ed6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:56 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:56.038 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[574c8a42-ff84-475d-a434-800611de377a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:56 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:56.040 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[c7541798-5055-4edc-a245-d00bdc0c579c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:56 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 10:20:56 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.695s CPU time.
Jan 23 10:20:56 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:56.056 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[72cec5de-929f-47d3-863e-805480021c89]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475460, 'reachable_time': 17872, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262611, 'error': None, 'target': 'ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:56 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:56.061 162436 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-13f52d23-9898-43a0-a951-b69cb2abebab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 23 10:20:56 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:56.061 162436 DEBUG oslo.privsep.daemon [-] privsep: reply[9a31c3c5-28fd-451b-8e0c-6ce7cda89d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:20:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d13f52d23\x2d9898\x2d43a0\x2da951\x2db69cb2abebab.mount: Deactivated successfully.
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.477 249233 INFO nova.virt.libvirt.driver [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Deleting instance files /var/lib/nova/instances/73e0ca50-1f94-47c9-afce-7591b733d68d_del
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.478 249233 INFO nova.virt.libvirt.driver [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Deletion of /var/lib/nova/instances/73e0ca50-1f94-47c9-afce-7591b733d68d_del complete
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.539 249233 INFO nova.compute.manager [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Took 2.16 seconds to destroy the instance on the hypervisor.
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.539 249233 DEBUG oslo.service.loopingcall [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.540 249233 DEBUG nova.compute.manager [-] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.540 249233 DEBUG nova.network.neutron [-] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 10:20:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.671 249233 DEBUG nova.compute.manager [req-806abff3-7add-431a-94fa-9dc97a433fea req-aaae5b37-68a4-445e-a59c-8f749b3fd189 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Received event network-vif-unplugged-a471e230-aa12-49dc-959a-7630183c2e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.672 249233 DEBUG oslo_concurrency.lockutils [req-806abff3-7add-431a-94fa-9dc97a433fea req-aaae5b37-68a4-445e-a59c-8f749b3fd189 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.672 249233 DEBUG oslo_concurrency.lockutils [req-806abff3-7add-431a-94fa-9dc97a433fea req-aaae5b37-68a4-445e-a59c-8f749b3fd189 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.672 249233 DEBUG oslo_concurrency.lockutils [req-806abff3-7add-431a-94fa-9dc97a433fea req-aaae5b37-68a4-445e-a59c-8f749b3fd189 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.672 249233 DEBUG nova.compute.manager [req-806abff3-7add-431a-94fa-9dc97a433fea req-aaae5b37-68a4-445e-a59c-8f749b3fd189 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] No waiting events found dispatching network-vif-unplugged-a471e230-aa12-49dc-959a-7630183c2e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.672 249233 DEBUG nova.compute.manager [req-806abff3-7add-431a-94fa-9dc97a433fea req-aaae5b37-68a4-445e-a59c-8f749b3fd189 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Received event network-vif-unplugged-a471e230-aa12-49dc-959a-7630183c2e5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:20:56 compute-0 nova_compute[249229]: 2026-01-23 10:20:56.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 10:20:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:56.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:20:57 compute-0 nova_compute[249229]: 2026-01-23 10:20:57.729 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:20:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:57.764Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:20:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:20:57.765Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
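Annotation: Alertmanager on this node cannot deliver the ceph-dashboard webhook notifications to compute-1 and compute-2 (i/o timeout and context deadline exceeded). A hedged reachability probe against the receiver URLs taken from the log; an empty-body POST is only a connectivity check, not a well-formed Alertmanager payload.

    # Hedged sketch: probe the prometheus_receiver endpoints Alertmanager cannot reach.
    # This only checks TCP/HTTP reachability; it does not send a valid alert payload.
    import urllib.request
    import urllib.error

    URLS = [
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
    ]

    for url in URLS:
        req = urllib.request.Request(url, data=b"{}", method="POST")
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, resp.status)
        except (urllib.error.URLError, OSError) as exc:
            print(url, "unreachable:", exc)   # matches the "dial tcp ... i/o timeout" above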
Jan 23 10:20:57 compute-0 ceph-mon[74335]: pgmap v830: 353 pgs: 353 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 23 10:20:57 compute-0 nova_compute[249229]: 2026-01-23 10:20:57.971 249233 DEBUG nova.network.neutron [-] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:20:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:20:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:57.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:20:57 compute-0 nova_compute[249229]: 2026-01-23 10:20:57.993 249233 INFO nova.compute.manager [-] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Took 1.45 seconds to deallocate network for instance.
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.042 249233 DEBUG oslo_concurrency.lockutils [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.043 249233 DEBUG oslo_concurrency.lockutils [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.108 249233 DEBUG oslo_concurrency.processutils [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:20:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:20:58 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1700923100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.564 249233 DEBUG oslo_concurrency.processutils [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
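Annotation: the resource tracker shells out to the exact command logged above, ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf, to size the RBD-backed disk inventory. A minimal sketch of the same call and of reading the cluster totals; the JSON key names are assumptions based on the usual ceph df JSON layout and are not visible in this log.

    # Hedged sketch: run the same "ceph df" command nova-compute logs above and read the totals.
    # Key names ("stats", "total_bytes", "total_avail_bytes") are assumed, not shown in this log.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    )
    df = json.loads(out.stdout)
    stats = df["stats"]
    print("total GiB:", stats["total_bytes"] / 2**30)
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)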
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.571 249233 DEBUG nova.compute.provider_tree [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.587 249233 DEBUG nova.scheduler.client.report [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
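Annotation: the inventory record above carries total, reserved, and allocation_ratio per resource class; the schedulable capacity placement derives from such a record is (total - reserved) × allocation_ratio. Applied to the values in this log:

    # Capacity implied by the inventory data logged above: (total - reserved) * allocation_ratio.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2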
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.612 249233 DEBUG oslo_concurrency.lockutils [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 105 KiB/s wr, 39 op/s
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.646 249233 INFO nova.scheduler.client.report [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Deleted allocations for instance 73e0ca50-1f94-47c9-afce-7591b733d68d
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.734 249233 DEBUG oslo_concurrency.lockutils [None req-88e2a281-89c7-4afd-abb7-ae363adf6e5b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.753 249233 DEBUG nova.compute.manager [req-5f353460-d280-4692-8392-c84932af97dc req-b637c530-dfcb-4b9b-907c-199b6ae6877f 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Received event network-vif-plugged-a471e230-aa12-49dc-959a-7630183c2e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.754 249233 DEBUG oslo_concurrency.lockutils [req-5f353460-d280-4692-8392-c84932af97dc req-b637c530-dfcb-4b9b-907c-199b6ae6877f 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.754 249233 DEBUG oslo_concurrency.lockutils [req-5f353460-d280-4692-8392-c84932af97dc req-b637c530-dfcb-4b9b-907c-199b6ae6877f 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.754 249233 DEBUG oslo_concurrency.lockutils [req-5f353460-d280-4692-8392-c84932af97dc req-b637c530-dfcb-4b9b-907c-199b6ae6877f 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "73e0ca50-1f94-47c9-afce-7591b733d68d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.754 249233 DEBUG nova.compute.manager [req-5f353460-d280-4692-8392-c84932af97dc req-b637c530-dfcb-4b9b-907c-199b6ae6877f 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] No waiting events found dispatching network-vif-plugged-a471e230-aa12-49dc-959a-7630183c2e5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.755 249233 WARNING nova.compute.manager [req-5f353460-d280-4692-8392-c84932af97dc req-b637c530-dfcb-4b9b-907c-199b6ae6877f 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Received unexpected event network-vif-plugged-a471e230-aa12-49dc-959a-7630183c2e5a for instance with vm_state deleted and task_state None.
Jan 23 10:20:58 compute-0 nova_compute[249229]: 2026-01-23 10:20:58.755 249233 DEBUG nova.compute.manager [req-5f353460-d280-4692-8392-c84932af97dc req-b637c530-dfcb-4b9b-907c-199b6ae6877f 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Received event network-vif-deleted-a471e230-aa12-49dc-959a-7630183c2e5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:20:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:58.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:20:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1700923100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:20:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:59.774 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:20:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:59.775 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:20:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:20:59.776 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:20:59 compute-0 ceph-mon[74335]: pgmap v831: 353 pgs: 353 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 105 KiB/s wr, 39 op/s
Jan 23 10:20:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:59] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Jan 23 10:20:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:20:59] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Jan 23 10:20:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:20:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:20:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:59.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 105 KiB/s wr, 39 op/s
Jan 23 10:21:00 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:21:00.662 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:21:00 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:21:00.663 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:21:00 compute-0 nova_compute[249229]: 2026-01-23 10:21:00.663 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:00.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:00 compute-0 nova_compute[249229]: 2026-01-23 10:21:00.848 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/102101 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:21:01 compute-0 ceph-mon[74335]: pgmap v832: 353 pgs: 353 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 105 KiB/s wr, 39 op/s
Jan 23 10:21:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:01.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 105 KiB/s wr, 48 op/s
Jan 23 10:21:02 compute-0 nova_compute[249229]: 2026-01-23 10:21:02.725 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:02 compute-0 nova_compute[249229]: 2026-01-23 10:21:02.741 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:02 compute-0 nova_compute[249229]: 2026-01-23 10:21:02.759 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:21:02 compute-0 nova_compute[249229]: 2026-01-23 10:21:02.760 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:21:02 compute-0 nova_compute[249229]: 2026-01-23 10:21:02.760 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:21:02 compute-0 nova_compute[249229]: 2026-01-23 10:21:02.760 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:21:02 compute-0 nova_compute[249229]: 2026-01-23 10:21:02.760 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:21:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:02.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:02 compute-0 nova_compute[249229]: 2026-01-23 10:21:02.786 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:21:03 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3874868156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:21:03 compute-0 nova_compute[249229]: 2026-01-23 10:21:03.240 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:21:03 compute-0 nova_compute[249229]: 2026-01-23 10:21:03.406 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:21:03 compute-0 nova_compute[249229]: 2026-01-23 10:21:03.407 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4593MB free_disk=59.94269943237305GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:21:03 compute-0 nova_compute[249229]: 2026-01-23 10:21:03.408 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:21:03 compute-0 nova_compute[249229]: 2026-01-23 10:21:03.408 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:21:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:03.652Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:21:03 compute-0 ceph-mon[74335]: pgmap v833: 353 pgs: 353 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 105 KiB/s wr, 48 op/s
Jan 23 10:21:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:03.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:04 compute-0 sudo[262666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:21:04 compute-0 sudo[262666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:04 compute-0 sudo[262666]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:04 compute-0 nova_compute[249229]: 2026-01-23 10:21:04.400 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:21:04 compute-0 nova_compute[249229]: 2026-01-23 10:21:04.400 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:21:04 compute-0 nova_compute[249229]: 2026-01-23 10:21:04.448 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:21:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 12 KiB/s wr, 31 op/s
Jan 23 10:21:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:21:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:04.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:21:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:21:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548080809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:21:05 compute-0 nova_compute[249229]: 2026-01-23 10:21:05.005 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:21:05 compute-0 nova_compute[249229]: 2026-01-23 10:21:05.010 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:21:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:21:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:21:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3874868156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:21:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/215747545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:21:05 compute-0 nova_compute[249229]: 2026-01-23 10:21:05.312 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:21:05 compute-0 nova_compute[249229]: 2026-01-23 10:21:05.661 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:21:05 compute-0 nova_compute[249229]: 2026-01-23 10:21:05.662 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:21:05 compute-0 nova_compute[249229]: 2026-01-23 10:21:05.850 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:05.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:06 compute-0 ceph-mon[74335]: pgmap v834: 353 pgs: 353 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 12 KiB/s wr, 31 op/s
Jan 23 10:21:06 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2548080809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:21:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:21:06 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 11.
Jan 23 10:21:06 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:21:06 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.695s CPU time.
Jan 23 10:21:06 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920...
Jan 23 10:21:06 compute-0 podman[262767]: 2026-01-23 10:21:06.408355886 +0000 UTC m=+0.039710799 container create 7431735b9f593f91a26e051f7d5d7ca98041b8e4ab84f7742a2cafcd1841742a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 10:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c13e6d917960ee83c3d51fccbaf7e19e7614ad0577b76ac381cc631930ca60/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c13e6d917960ee83c3d51fccbaf7e19e7614ad0577b76ac381cc631930ca60/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c13e6d917960ee83c3d51fccbaf7e19e7614ad0577b76ac381cc631930ca60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c13e6d917960ee83c3d51fccbaf7e19e7614ad0577b76ac381cc631930ca60/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fenqiu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:06 compute-0 podman[262767]: 2026-01-23 10:21:06.464275133 +0000 UTC m=+0.095630046 container init 7431735b9f593f91a26e051f7d5d7ca98041b8e4ab84f7742a2cafcd1841742a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 10:21:06 compute-0 podman[262767]: 2026-01-23 10:21:06.469568396 +0000 UTC m=+0.100923309 container start 7431735b9f593f91a26e051f7d5d7ca98041b8e4ab84f7742a2cafcd1841742a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:21:06 compute-0 bash[262767]: 7431735b9f593f91a26e051f7d5d7ca98041b8e4ab84f7742a2cafcd1841742a
Jan 23 10:21:06 compute-0 podman[262767]: 2026-01-23 10:21:06.39086129 +0000 UTC m=+0.022216213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:21:06 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:21:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:06 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 23 10:21:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:06 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 23 10:21:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:06 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 23 10:21:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:06 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 23 10:21:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:06 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 23 10:21:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:06 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 23 10:21:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:06 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 23 10:21:06 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:06 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:21:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 56 op/s
Jan 23 10:21:06 compute-0 nova_compute[249229]: 2026-01-23 10:21:06.637 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:06 compute-0 nova_compute[249229]: 2026-01-23 10:21:06.637 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:21:06 compute-0 nova_compute[249229]: 2026-01-23 10:21:06.638 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:21:06 compute-0 nova_compute[249229]: 2026-01-23 10:21:06.651 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:21:06 compute-0 nova_compute[249229]: 2026-01-23 10:21:06.652 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:06 compute-0 nova_compute[249229]: 2026-01-23 10:21:06.652 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:06 compute-0 nova_compute[249229]: 2026-01-23 10:21:06.652 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:06 compute-0 nova_compute[249229]: 2026-01-23 10:21:06.652 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:21:06 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:21:06.665 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:21:06 compute-0 nova_compute[249229]: 2026-01-23 10:21:06.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:06 compute-0 nova_compute[249229]: 2026-01-23 10:21:06.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:06.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:07 compute-0 nova_compute[249229]: 2026-01-23 10:21:07.719 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:07 compute-0 nova_compute[249229]: 2026-01-23 10:21:07.719 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:07.766Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:21:07 compute-0 nova_compute[249229]: 2026-01-23 10:21:07.774 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1928470914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:21:07 compute-0 ceph-mon[74335]: pgmap v835: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 56 op/s
Jan 23 10:21:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/731693582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:21:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:07.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 34 op/s
Jan 23 10:21:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:08.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:09 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3840207814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:21:09 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3507747902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:21:09 compute-0 nova_compute[249229]: 2026-01-23 10:21:09.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:09 compute-0 nova_compute[249229]: 2026-01-23 10:21:09.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:09 compute-0 nova_compute[249229]: 2026-01-23 10:21:09.718 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 10:21:09 compute-0 nova_compute[249229]: 2026-01-23 10:21:09.734 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 10:21:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:09] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Jan 23 10:21:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:09] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Jan 23 10:21:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:09.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 34 op/s
Jan 23 10:21:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:10.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:10 compute-0 nova_compute[249229]: 2026-01-23 10:21:10.824 249233 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163655.8228476, 73e0ca50-1f94-47c9-afce-7591b733d68d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:21:10 compute-0 nova_compute[249229]: 2026-01-23 10:21:10.825 249233 INFO nova.compute.manager [-] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] VM Stopped (Lifecycle Event)
Jan 23 10:21:10 compute-0 nova_compute[249229]: 2026-01-23 10:21:10.853 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:10 compute-0 nova_compute[249229]: 2026-01-23 10:21:10.986 249233 DEBUG nova.compute.manager [None req-44c7e184-2f71-4bd3-9dac-6f4016fcba9b - - - - - -] [instance: 73e0ca50-1f94-47c9-afce-7591b733d68d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:21:11 compute-0 ceph-mon[74335]: pgmap v836: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 34 op/s
Jan 23 10:21:11 compute-0 nova_compute[249229]: 2026-01-23 10:21:11.586 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:11 compute-0 nova_compute[249229]: 2026-01-23 10:21:11.674 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:11.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:12 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:21:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:12 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:21:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 KiB/s wr, 35 op/s
Jan 23 10:21:12 compute-0 ceph-mon[74335]: pgmap v837: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 34 op/s
Jan 23 10:21:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:12.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:12 compute-0 nova_compute[249229]: 2026-01-23 10:21:12.828 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:13.653Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:21:13 compute-0 ceph-mon[74335]: pgmap v838: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 KiB/s wr, 35 op/s
Jan 23 10:21:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:14.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 KiB/s wr, 26 op/s
Jan 23 10:21:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:14.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:15 compute-0 nova_compute[249229]: 2026-01-23 10:21:15.855 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:15 compute-0 ceph-mon[74335]: pgmap v839: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 KiB/s wr, 26 op/s
Jan 23 10:21:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:16.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:16 compute-0 podman[262835]: 2026-01-23 10:21:16.55357395 +0000 UTC m=+0.081881069 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 10:21:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 23 10:21:16 compute-0 nova_compute[249229]: 2026-01-23 10:21:16.649 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:21:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:16.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:17.767Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:21:17 compute-0 nova_compute[249229]: 2026-01-23 10:21:17.830 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:18.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:18 compute-0 ceph-mon[74335]: pgmap v840: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 23 10:21:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 23 10:21:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:18 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 23 10:21:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:18.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:19 compute-0 ceph-mon[74335]: pgmap v841: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:21:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe658000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:19] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Jan 23 10:21:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:19] "GET /metrics HTTP/1.1" 200 48560 "" "Prometheus/2.51.0"
Jan 23 10:21:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:20.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:21:20
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['vms', '.nfs', '.mgr', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:21:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:21:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:21:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:21:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:21:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:20.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:20 compute-0 nova_compute[249229]: 2026-01-23 10:21:20.857 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/102121 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:21:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:21 compute-0 ceph-mon[74335]: pgmap v842: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:21:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:22.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:21:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:22.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:22 compute-0 nova_compute[249229]: 2026-01-23 10:21:22.832 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:23.654Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:21:23 compute-0 ceph-mon[74335]: pgmap v843: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 10:21:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:24.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:24 compute-0 sudo[262884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:21:24 compute-0 sudo[262884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:24 compute-0 sudo[262884]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:21:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:25 compute-0 podman[262910]: 2026-01-23 10:21:25.53340229 +0000 UTC m=+0.057580295 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 10:21:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:25 compute-0 nova_compute[249229]: 2026-01-23 10:21:25.859 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:25 compute-0 ceph-mon[74335]: pgmap v844: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:21:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:26.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:21:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:26.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:27 compute-0 ceph-mon[74335]: pgmap v845: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:21:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:27.768Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:21:27 compute-0 nova_compute[249229]: 2026-01-23 10:21:27.835 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:28.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:21:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:28.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:29 compute-0 ceph-mon[74335]: pgmap v846: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:21:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:29] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:21:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:29] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:21:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:30.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:21:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:30.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:30 compute-0 nova_compute[249229]: 2026-01-23 10:21:30.861 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:31 compute-0 ceph-mon[74335]: pgmap v847: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:21:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:32.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:21:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:32.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:32 compute-0 nova_compute[249229]: 2026-01-23 10:21:32.837 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:33.655Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:21:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:33.655Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:21:33 compute-0 ceph-mon[74335]: pgmap v848: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:21:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:34.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:34 compute-0 sudo[262938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:21:34 compute-0 sudo[262938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:34 compute-0 sudo[262938]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:34 compute-0 sudo[262963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:21:34 compute-0 sudo[262963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:21:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:34.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 10:21:34 compute-0 sudo[262963]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:21:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:21:35 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3733854137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:21:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:35 compute-0 nova_compute[249229]: 2026-01-23 10:21:35.864 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:36.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 62 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 873 KiB/s wr, 14 op/s
Jan 23 10:21:36 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 10:21:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:36.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6340030d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:37 compute-0 ceph-mon[74335]: pgmap v849: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:21:37 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:21:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:37.770Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:21:37 compute-0 nova_compute[249229]: 2026-01-23 10:21:37.839 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:38.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 84 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.5 MiB/s wr, 16 op/s
Jan 23 10:21:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:21:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:38.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:21:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:21:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:21:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:21:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:21:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:21:38 compute-0 ceph-mon[74335]: pgmap v850: 353 pgs: 353 active+clean; 62 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 873 KiB/s wr, 14 op/s
Jan 23 10:21:38 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:38 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:21:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6340030d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:21:39 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:21:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:21:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:21:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:21:39 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:21:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:39 compute-0 sudo[263025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:21:39 compute-0 sudo[263025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:39 compute-0 sudo[263025]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:39 compute-0 sudo[263050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:21:39 compute-0 sudo[263050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:39] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 23 10:21:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:39] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 23 10:21:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:40.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:40 compute-0 ceph-mon[74335]: pgmap v851: 353 pgs: 353 active+clean; 84 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.5 MiB/s wr, 16 op/s
Jan 23 10:21:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:21:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:21:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:21:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:21:40 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:21:40 compute-0 podman[263114]: 2026-01-23 10:21:40.152211528 +0000 UTC m=+0.040257115 container create 9f2385c41d7cbeb1abada2e4745c1da3807a0d4f5a6bc4e8a9f8dd9330e33fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 10:21:40 compute-0 systemd[1]: Started libpod-conmon-9f2385c41d7cbeb1abada2e4745c1da3807a0d4f5a6bc4e8a9f8dd9330e33fc2.scope.
Jan 23 10:21:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:21:40 compute-0 podman[263114]: 2026-01-23 10:21:40.135851855 +0000 UTC m=+0.023897472 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:21:40 compute-0 podman[263114]: 2026-01-23 10:21:40.236176316 +0000 UTC m=+0.124221933 container init 9f2385c41d7cbeb1abada2e4745c1da3807a0d4f5a6bc4e8a9f8dd9330e33fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curran, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 10:21:40 compute-0 podman[263114]: 2026-01-23 10:21:40.242771187 +0000 UTC m=+0.130816774 container start 9f2385c41d7cbeb1abada2e4745c1da3807a0d4f5a6bc4e8a9f8dd9330e33fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:21:40 compute-0 podman[263114]: 2026-01-23 10:21:40.246105283 +0000 UTC m=+0.134150900 container attach 9f2385c41d7cbeb1abada2e4745c1da3807a0d4f5a6bc4e8a9f8dd9330e33fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curran, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:21:40 compute-0 quirky_curran[263131]: 167 167
Jan 23 10:21:40 compute-0 systemd[1]: libpod-9f2385c41d7cbeb1abada2e4745c1da3807a0d4f5a6bc4e8a9f8dd9330e33fc2.scope: Deactivated successfully.
Jan 23 10:21:40 compute-0 podman[263114]: 2026-01-23 10:21:40.249194193 +0000 UTC m=+0.137239780 container died 9f2385c41d7cbeb1abada2e4745c1da3807a0d4f5a6bc4e8a9f8dd9330e33fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curran, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae6a0d8bfad6763d3995b1d03b1730a5af7e144ce4df21854e52fefe2dccdcc8-merged.mount: Deactivated successfully.
Jan 23 10:21:40 compute-0 podman[263114]: 2026-01-23 10:21:40.283914046 +0000 UTC m=+0.171959623 container remove 9f2385c41d7cbeb1abada2e4745c1da3807a0d4f5a6bc4e8a9f8dd9330e33fc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 23 10:21:40 compute-0 systemd[1]: libpod-conmon-9f2385c41d7cbeb1abada2e4745c1da3807a0d4f5a6bc4e8a9f8dd9330e33fc2.scope: Deactivated successfully.
Jan 23 10:21:40 compute-0 podman[263155]: 2026-01-23 10:21:40.459655158 +0000 UTC m=+0.049763680 container create 0089e491ee91d66ff2d229a31665adf347d09798608d3b117e652a5b428c1dcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:21:40 compute-0 systemd[1]: Started libpod-conmon-0089e491ee91d66ff2d229a31665adf347d09798608d3b117e652a5b428c1dcc.scope.
Jan 23 10:21:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:21:40 compute-0 podman[263155]: 2026-01-23 10:21:40.440547485 +0000 UTC m=+0.030656047 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ed5ea0e6e671e30bb051d40427743fa1813fc6736027fabca846c6c4c25ba6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ed5ea0e6e671e30bb051d40427743fa1813fc6736027fabca846c6c4c25ba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ed5ea0e6e671e30bb051d40427743fa1813fc6736027fabca846c6c4c25ba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ed5ea0e6e671e30bb051d40427743fa1813fc6736027fabca846c6c4c25ba6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ed5ea0e6e671e30bb051d40427743fa1813fc6736027fabca846c6c4c25ba6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:40 compute-0 podman[263155]: 2026-01-23 10:21:40.558809105 +0000 UTC m=+0.148917647 container init 0089e491ee91d66ff2d229a31665adf347d09798608d3b117e652a5b428c1dcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_euler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 10:21:40 compute-0 podman[263155]: 2026-01-23 10:21:40.565772476 +0000 UTC m=+0.155881008 container start 0089e491ee91d66ff2d229a31665adf347d09798608d3b117e652a5b428c1dcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_euler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:21:40 compute-0 podman[263155]: 2026-01-23 10:21:40.568810624 +0000 UTC m=+0.158919156 container attach 0089e491ee91d66ff2d229a31665adf347d09798608d3b117e652a5b428c1dcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 10:21:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 84 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.5 MiB/s wr, 16 op/s
Jan 23 10:21:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:40.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:40 compute-0 nova_compute[249229]: 2026-01-23 10:21:40.865 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:40 compute-0 strange_euler[263173]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:21:40 compute-0 strange_euler[263173]: --> All data devices are unavailable
Jan 23 10:21:40 compute-0 systemd[1]: libpod-0089e491ee91d66ff2d229a31665adf347d09798608d3b117e652a5b428c1dcc.scope: Deactivated successfully.
Jan 23 10:21:40 compute-0 podman[263155]: 2026-01-23 10:21:40.941645864 +0000 UTC m=+0.531754396 container died 0089e491ee91d66ff2d229a31665adf347d09798608d3b117e652a5b428c1dcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 10:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-83ed5ea0e6e671e30bb051d40427743fa1813fc6736027fabca846c6c4c25ba6-merged.mount: Deactivated successfully.
Jan 23 10:21:40 compute-0 podman[263155]: 2026-01-23 10:21:40.982733292 +0000 UTC m=+0.572841814 container remove 0089e491ee91d66ff2d229a31665adf347d09798608d3b117e652a5b428c1dcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_euler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 10:21:40 compute-0 systemd[1]: libpod-conmon-0089e491ee91d66ff2d229a31665adf347d09798608d3b117e652a5b428c1dcc.scope: Deactivated successfully.
Jan 23 10:21:41 compute-0 sudo[263050]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:41 compute-0 sudo[263198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:21:41 compute-0 sudo[263198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:41 compute-0 sudo[263198]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:41 compute-0 sudo[263223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:21:41 compute-0 sudo[263223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:41 compute-0 ceph-mon[74335]: pgmap v852: 353 pgs: 353 active+clean; 84 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.5 MiB/s wr, 16 op/s
Jan 23 10:21:41 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1204880216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:21:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:41 compute-0 podman[263286]: 2026-01-23 10:21:41.504037885 +0000 UTC m=+0.038574076 container create ac206ea2af371538ade93023433bc2b0d818012b81e814d1b3c65bb416094563 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_swartz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:21:41 compute-0 systemd[1]: Started libpod-conmon-ac206ea2af371538ade93023433bc2b0d818012b81e814d1b3c65bb416094563.scope.
Jan 23 10:21:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:21:41 compute-0 podman[263286]: 2026-01-23 10:21:41.580806025 +0000 UTC m=+0.115342226 container init ac206ea2af371538ade93023433bc2b0d818012b81e814d1b3c65bb416094563 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:21:41 compute-0 podman[263286]: 2026-01-23 10:21:41.4889855 +0000 UTC m=+0.023521721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:21:41 compute-0 podman[263286]: 2026-01-23 10:21:41.589545387 +0000 UTC m=+0.124081578 container start ac206ea2af371538ade93023433bc2b0d818012b81e814d1b3c65bb416094563 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_swartz, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 10:21:41 compute-0 silly_swartz[263302]: 167 167
Jan 23 10:21:41 compute-0 systemd[1]: libpod-ac206ea2af371538ade93023433bc2b0d818012b81e814d1b3c65bb416094563.scope: Deactivated successfully.
Jan 23 10:21:41 compute-0 conmon[263302]: conmon ac206ea2af371538ade9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac206ea2af371538ade93023433bc2b0d818012b81e814d1b3c65bb416094563.scope/container/memory.events
Jan 23 10:21:41 compute-0 podman[263286]: 2026-01-23 10:21:41.602803931 +0000 UTC m=+0.137340112 container attach ac206ea2af371538ade93023433bc2b0d818012b81e814d1b3c65bb416094563 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_swartz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 10:21:41 compute-0 podman[263286]: 2026-01-23 10:21:41.603539952 +0000 UTC m=+0.138076143 container died ac206ea2af371538ade93023433bc2b0d818012b81e814d1b3c65bb416094563 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_swartz, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:21:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-22bb2ae8691b118c112cff5a235c0556216943aa919e3528436f4d1ee8d2f52e-merged.mount: Deactivated successfully.
Jan 23 10:21:41 compute-0 podman[263286]: 2026-01-23 10:21:41.638596225 +0000 UTC m=+0.173132416 container remove ac206ea2af371538ade93023433bc2b0d818012b81e814d1b3c65bb416094563 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_swartz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 23 10:21:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:41 compute-0 systemd[1]: libpod-conmon-ac206ea2af371538ade93023433bc2b0d818012b81e814d1b3c65bb416094563.scope: Deactivated successfully.
Jan 23 10:21:41 compute-0 podman[263325]: 2026-01-23 10:21:41.79333394 +0000 UTC m=+0.041684987 container create 9c44b9fdd0a2087f832541c146ff6a9f6e891ad5924b29af3a875e3424dd1939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:21:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:41 compute-0 systemd[1]: Started libpod-conmon-9c44b9fdd0a2087f832541c146ff6a9f6e891ad5924b29af3a875e3424dd1939.scope.
Jan 23 10:21:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17dd7a8779116b92c026137157cca7ba75113edd9aa72385e46339ed947b2b40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17dd7a8779116b92c026137157cca7ba75113edd9aa72385e46339ed947b2b40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17dd7a8779116b92c026137157cca7ba75113edd9aa72385e46339ed947b2b40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17dd7a8779116b92c026137157cca7ba75113edd9aa72385e46339ed947b2b40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:41 compute-0 podman[263325]: 2026-01-23 10:21:41.776029019 +0000 UTC m=+0.024380086 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:21:41 compute-0 podman[263325]: 2026-01-23 10:21:41.87288981 +0000 UTC m=+0.121240877 container init 9c44b9fdd0a2087f832541c146ff6a9f6e891ad5924b29af3a875e3424dd1939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:21:41 compute-0 podman[263325]: 2026-01-23 10:21:41.880446208 +0000 UTC m=+0.128797255 container start 9c44b9fdd0a2087f832541c146ff6a9f6e891ad5924b29af3a875e3424dd1939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 10:21:41 compute-0 podman[263325]: 2026-01-23 10:21:41.883421264 +0000 UTC m=+0.131772311 container attach 9c44b9fdd0a2087f832541c146ff6a9f6e891ad5924b29af3a875e3424dd1939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:21:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:42.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:42 compute-0 kind_poincare[263343]: {
Jan 23 10:21:42 compute-0 kind_poincare[263343]:     "1": [
Jan 23 10:21:42 compute-0 kind_poincare[263343]:         {
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             "devices": [
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "/dev/loop3"
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             ],
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             "lv_name": "ceph_lv0",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             "lv_size": "21470642176",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             "name": "ceph_lv0",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             "tags": {
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.cluster_name": "ceph",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.crush_device_class": "",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.encrypted": "0",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.osd_id": "1",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.type": "block",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.vdo": "0",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:                 "ceph.with_tpm": "0"
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             },
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             "type": "block",
Jan 23 10:21:42 compute-0 kind_poincare[263343]:             "vg_name": "ceph_vg0"
Jan 23 10:21:42 compute-0 kind_poincare[263343]:         }
Jan 23 10:21:42 compute-0 kind_poincare[263343]:     ]
Jan 23 10:21:42 compute-0 kind_poincare[263343]: }
Jan 23 10:21:42 compute-0 systemd[1]: libpod-9c44b9fdd0a2087f832541c146ff6a9f6e891ad5924b29af3a875e3424dd1939.scope: Deactivated successfully.
Jan 23 10:21:42 compute-0 podman[263325]: 2026-01-23 10:21:42.155179202 +0000 UTC m=+0.403530279 container died 9c44b9fdd0a2087f832541c146ff6a9f6e891ad5924b29af3a875e3424dd1939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:21:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-17dd7a8779116b92c026137157cca7ba75113edd9aa72385e46339ed947b2b40-merged.mount: Deactivated successfully.
Jan 23 10:21:42 compute-0 podman[263325]: 2026-01-23 10:21:42.193992154 +0000 UTC m=+0.442343191 container remove 9c44b9fdd0a2087f832541c146ff6a9f6e891ad5924b29af3a875e3424dd1939 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poincare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:21:42 compute-0 systemd[1]: libpod-conmon-9c44b9fdd0a2087f832541c146ff6a9f6e891ad5924b29af3a875e3424dd1939.scope: Deactivated successfully.
Jan 23 10:21:42 compute-0 sudo[263223]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:42 compute-0 sudo[263364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:21:42 compute-0 sudo[263364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:42 compute-0 sudo[263364]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:42 compute-0 sudo[263389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:21:42 compute-0 sudo[263389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:42 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1226134693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:21:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:21:42 compute-0 podman[263457]: 2026-01-23 10:21:42.735012837 +0000 UTC m=+0.041232613 container create 305220bc77fa353f6bb8af7df4e295e9193bf7a3d4797d832f100e5f6e125b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 23 10:21:42 compute-0 systemd[1]: Started libpod-conmon-305220bc77fa353f6bb8af7df4e295e9193bf7a3d4797d832f100e5f6e125b9e.scope.
Jan 23 10:21:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:21:42 compute-0 podman[263457]: 2026-01-23 10:21:42.808917214 +0000 UTC m=+0.115137010 container init 305220bc77fa353f6bb8af7df4e295e9193bf7a3d4797d832f100e5f6e125b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 23 10:21:42 compute-0 podman[263457]: 2026-01-23 10:21:42.715500173 +0000 UTC m=+0.021719979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:21:42 compute-0 podman[263457]: 2026-01-23 10:21:42.816964407 +0000 UTC m=+0.123184183 container start 305220bc77fa353f6bb8af7df4e295e9193bf7a3d4797d832f100e5f6e125b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:21:42 compute-0 podman[263457]: 2026-01-23 10:21:42.820761616 +0000 UTC m=+0.126981412 container attach 305220bc77fa353f6bb8af7df4e295e9193bf7a3d4797d832f100e5f6e125b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:21:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:42 compute-0 nice_northcutt[263473]: 167 167
Jan 23 10:21:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:42 compute-0 systemd[1]: libpod-305220bc77fa353f6bb8af7df4e295e9193bf7a3d4797d832f100e5f6e125b9e.scope: Deactivated successfully.
Jan 23 10:21:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:42.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:42 compute-0 podman[263478]: 2026-01-23 10:21:42.8596145 +0000 UTC m=+0.025790307 container died 305220bc77fa353f6bb8af7df4e295e9193bf7a3d4797d832f100e5f6e125b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_northcutt, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:21:42 compute-0 nova_compute[249229]: 2026-01-23 10:21:42.894 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e346fddd6b53e2297c7824ca383b082307b2acd19a5543218239f995ed3cdf3-merged.mount: Deactivated successfully.
Jan 23 10:21:42 compute-0 podman[263478]: 2026-01-23 10:21:42.916416272 +0000 UTC m=+0.082592069 container remove 305220bc77fa353f6bb8af7df4e295e9193bf7a3d4797d832f100e5f6e125b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_northcutt, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 23 10:21:42 compute-0 systemd[1]: libpod-conmon-305220bc77fa353f6bb8af7df4e295e9193bf7a3d4797d832f100e5f6e125b9e.scope: Deactivated successfully.
Jan 23 10:21:43 compute-0 podman[263500]: 2026-01-23 10:21:43.081568958 +0000 UTC m=+0.041723588 container create d4c9a32bb77670b324661d4cf89177b124426887c0d36e2efbad8602de210662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hawking, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 10:21:43 compute-0 systemd[1]: Started libpod-conmon-d4c9a32bb77670b324661d4cf89177b124426887c0d36e2efbad8602de210662.scope.
Jan 23 10:21:43 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e107737fdd8baf46b75ea6d37d2800cb10ada96001e6cd2edc853b1283a1004c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e107737fdd8baf46b75ea6d37d2800cb10ada96001e6cd2edc853b1283a1004c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e107737fdd8baf46b75ea6d37d2800cb10ada96001e6cd2edc853b1283a1004c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e107737fdd8baf46b75ea6d37d2800cb10ada96001e6cd2edc853b1283a1004c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:21:43 compute-0 podman[263500]: 2026-01-23 10:21:43.153626811 +0000 UTC m=+0.113781451 container init d4c9a32bb77670b324661d4cf89177b124426887c0d36e2efbad8602de210662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:21:43 compute-0 podman[263500]: 2026-01-23 10:21:43.063751202 +0000 UTC m=+0.023905852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:21:43 compute-0 podman[263500]: 2026-01-23 10:21:43.16049952 +0000 UTC m=+0.120654150 container start d4c9a32bb77670b324661d4cf89177b124426887c0d36e2efbad8602de210662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 10:21:43 compute-0 podman[263500]: 2026-01-23 10:21:43.163440675 +0000 UTC m=+0.123595335 container attach d4c9a32bb77670b324661d4cf89177b124426887c0d36e2efbad8602de210662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:21:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:43 compute-0 ceph-mon[74335]: pgmap v853: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:21:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:43.656Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:21:43 compute-0 lvm[263591]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:21:43 compute-0 lvm[263591]: VG ceph_vg0 finished
Jan 23 10:21:43 compute-0 hungry_hawking[263517]: {}
Jan 23 10:21:43 compute-0 systemd[1]: libpod-d4c9a32bb77670b324661d4cf89177b124426887c0d36e2efbad8602de210662.scope: Deactivated successfully.
Jan 23 10:21:43 compute-0 podman[263500]: 2026-01-23 10:21:43.799597546 +0000 UTC m=+0.759752206 container died d4c9a32bb77670b324661d4cf89177b124426887c0d36e2efbad8602de210662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hawking, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:21:43 compute-0 systemd[1]: libpod-d4c9a32bb77670b324661d4cf89177b124426887c0d36e2efbad8602de210662.scope: Consumed 1.030s CPU time.
Jan 23 10:21:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e107737fdd8baf46b75ea6d37d2800cb10ada96001e6cd2edc853b1283a1004c-merged.mount: Deactivated successfully.
Jan 23 10:21:43 compute-0 podman[263500]: 2026-01-23 10:21:43.836439951 +0000 UTC m=+0.796594581 container remove d4c9a32bb77670b324661d4cf89177b124426887c0d36e2efbad8602de210662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_hawking, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 23 10:21:43 compute-0 systemd[1]: libpod-conmon-d4c9a32bb77670b324661d4cf89177b124426887c0d36e2efbad8602de210662.scope: Deactivated successfully.
Jan 23 10:21:43 compute-0 sudo[263389]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:21:43 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:21:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:44.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:44 compute-0 sudo[263609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:21:44 compute-0 sudo[263609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:44 compute-0 sudo[263609]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:44 compute-0 sudo[263635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:21:44 compute-0 sudo[263635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:21:44 compute-0 sudo[263635]: pam_unix(sudo:session): session closed for user root
Jan 23 10:21:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:21:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:44.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:45 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:45 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:21:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:45 compute-0 nova_compute[249229]: 2026-01-23 10:21:45.868 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:46.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.578181) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163706578432, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2166, "num_deletes": 251, "total_data_size": 4410921, "memory_usage": 4467680, "flush_reason": "Manual Compaction"}
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163706583387, "job": 0, "event": "table_file_deletion", "file_number": 55}
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163706585771, "job": 0, "event": "table_file_deletion", "file_number": 53}
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163706600778, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4220061, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24912, "largest_seqno": 27076, "table_properties": {"data_size": 4210426, "index_size": 6065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20446, "raw_average_key_size": 20, "raw_value_size": 4190923, "raw_average_value_size": 4207, "num_data_blocks": 262, "num_entries": 996, "num_filter_entries": 996, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163491, "oldest_key_time": 1769163491, "file_creation_time": 1769163706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 22632 microseconds, and 11109 cpu microseconds.
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.600850) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4220061 bytes OK
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.600892) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.602992) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.603015) EVENT_LOG_v1 {"time_micros": 1769163706603010, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.603055) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4402108, prev total WAL file size 4402815, number of live WAL files 2.
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.604313) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(4121KB)], [56(12MB)]
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163706604485, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 17044294, "oldest_snapshot_seqno": -1}
Jan 23 10:21:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 984 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5922 keys, 14851923 bytes, temperature: kUnknown
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163706727170, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14851923, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14811040, "index_size": 24965, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 150529, "raw_average_key_size": 25, "raw_value_size": 14702646, "raw_average_value_size": 2482, "num_data_blocks": 1017, "num_entries": 5922, "num_filter_entries": 5922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769163706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.727472) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14851923 bytes
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.728475) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.8 rd, 121.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.2 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 6445, records dropped: 523 output_compression: NoCompression
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.728491) EVENT_LOG_v1 {"time_micros": 1769163706728483, "job": 30, "event": "compaction_finished", "compaction_time_micros": 122758, "compaction_time_cpu_micros": 38674, "output_level": 6, "num_output_files": 1, "total_output_size": 14851923, "num_input_records": 6445, "num_output_records": 5922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163706729169, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163706731932, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.604182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.732045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.732052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.732053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.732055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:21:46 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:21:46.732056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
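[editor's note] The ceph-mon RocksDB lines above carry machine-readable JSON after the "EVENT_LOG_v1 " marker. As a minimal sketch (not part of the log), the following could be run against a saved journal excerpt to summarize the compaction_finished events; the input file name and the selected fields are assumptions for illustration only.

#!/usr/bin/env python3
# Sketch: summarize RocksDB "compaction_finished" events found in a saved
# journal excerpt (the file name "mon-journal.txt" is an assumption).
import json

MARKER = "EVENT_LOG_v1 "

def iter_events(path):
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            idx = line.find(MARKER)
            if idx == -1:
                continue
            try:
                yield json.loads(line[idx + len(MARKER):])
            except json.JSONDecodeError:
                continue  # truncated or non-JSON tail; skip it

def main():
    for ev in iter_events("mon-journal.txt"):
        if ev.get("event") != "compaction_finished":
            continue
        print("job %s: level %s, %s -> %s records, %s bytes out, %.1f ms"
              % (ev.get("job"), ev.get("output_level"),
                 ev.get("num_input_records"), ev.get("num_output_records"),
                 ev.get("total_output_size"),
                 ev.get("compaction_time_micros", 0) / 1000.0))

if __name__ == "__main__":
    main()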
Jan 23 10:21:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:46.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:47 compute-0 podman[263662]: 2026-01-23 10:21:47.549316713 +0000 UTC m=+0.081848078 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
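[editor's note] The podman event above records a periodic healthcheck result (health_status=healthy) for the ovn_controller container. A minimal sketch of re-running that check by hand, assuming the container name from the log line and that "podman healthcheck run" is available on this host:

#!/usr/bin/env python3
# Sketch: trigger the same container healthcheck the podman event above
# records. "podman healthcheck run" exits 0 when the check passes.
import subprocess

rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
print("ovn_controller:", "healthy" if rc == 0 else f"unhealthy (rc={rc})")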
Jan 23 10:21:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:47.771Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:21:47 compute-0 ceph-mon[74335]: pgmap v854: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:21:47 compute-0 nova_compute[249229]: 2026-01-23 10:21:47.895 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:48.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:21:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1255014004' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:21:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:21:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1255014004' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:21:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 954 KiB/s wr, 86 op/s
Jan 23 10:21:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:48.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:49 compute-0 ceph-mon[74335]: pgmap v855: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 984 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 23 10:21:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1255014004' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:21:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1255014004' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:21:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:49] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 23 10:21:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:49] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
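[editor's note] The two lines above show Prometheus scraping the ceph-mgr prometheus module's /metrics endpoint. A minimal sketch of fetching the same endpoint and printing a few Ceph series; the port (9283 is the module's usual default) and the metric name prefixes are assumptions, since neither appears in the log.

#!/usr/bin/env python3
# Sketch: fetch the ceph-mgr prometheus module metrics the scrape above hits.
# Host/port and metric names are assumptions; adjust to the real endpoint.
import urllib.request

URL = "http://192.168.122.100:9283/metrics"

with urllib.request.urlopen(URL, timeout=5) as resp:
    body = resp.read().decode("utf-8", errors="replace")

for line in body.splitlines():
    if line.startswith("ceph_health_status") or line.startswith("ceph_cluster_total"):
        print(line)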
Jan 23 10:21:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:21:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:50.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:21:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:21:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:21:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:21:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:21:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:21:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:21:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:21:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:21:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 269 KiB/s wr, 85 op/s
Jan 23 10:21:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:50.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:50 compute-0 nova_compute[249229]: 2026-01-23 10:21:50.871 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:51 compute-0 ceph-mon[74335]: pgmap v856: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 954 KiB/s wr, 86 op/s
Jan 23 10:21:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:21:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe624000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:52.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:52 compute-0 ceph-mon[74335]: pgmap v857: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 269 KiB/s wr, 85 op/s
Jan 23 10:21:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 269 KiB/s wr, 85 op/s
Jan 23 10:21:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:52.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:52 compute-0 nova_compute[249229]: 2026-01-23 10:21:52.939 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:53 compute-0 ceph-mon[74335]: pgmap v858: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 269 KiB/s wr, 85 op/s
Jan 23 10:21:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:53.658Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:21:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:54.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 10:21:54 compute-0 ovn_controller[151634]: 2026-01-23T10:21:54Z|00049|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Jan 23 10:21:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:54.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6240016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:55 compute-0 nova_compute[249229]: 2026-01-23 10:21:55.872 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:55 compute-0 ceph-mon[74335]: pgmap v859: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 10:21:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:56.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:56 compute-0 podman[263701]: 2026-01-23 10:21:56.528268126 +0000 UTC m=+0.052554820 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 10:21:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 97 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 743 KiB/s wr, 85 op/s
Jan 23 10:21:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:21:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:56.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:57 compute-0 ceph-mon[74335]: pgmap v860: 353 pgs: 353 active+clean; 97 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 743 KiB/s wr, 85 op/s
Jan 23 10:21:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6240016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:57.772Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:21:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:21:57.772Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
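[editor's note] The Alertmanager lines above show the ceph-dashboard webhook receivers on compute-1 and compute-2 timing out. A minimal sketch of probing the same URL directly, to separate a network/timeout problem from an application error; the payload here is a simplified stand-in, not a full Alertmanager webhook body.

#!/usr/bin/env python3
# Sketch: probe the dashboard webhook receiver Alertmanager cannot reach
# above. The payload is a minimal placeholder; the point is only to see
# whether the TCP/HTTP path works at all.
import json
import urllib.error
import urllib.request

URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
payload = json.dumps({"status": "firing", "alerts": []}).encode()

req = urllib.request.Request(URL, data=payload,
                             headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("HTTP", resp.status)
except urllib.error.URLError as exc:
    print("request failed:", exc)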
Jan 23 10:21:57 compute-0 nova_compute[249229]: 2026-01-23 10:21:57.941 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:21:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:21:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:58.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:21:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 108 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 23 10:21:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:21:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:21:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:58.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:21:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/102159 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:21:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:21:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:21:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6240016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
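[editor's note] The haproxy message at 10:21:59 marks backend nfs.cephfs.1 DOWN after a Layer4 "Connection refused" check. A minimal sketch of repeating that check by hand with a plain TCP connect; the host and port below are placeholders, since the real backend address for nfs.cephfs.1 lives in the haproxy configuration, which this excerpt does not show.

#!/usr/bin/env python3
# Sketch: repeat haproxy's Layer4 check against an NFS backend by hand.
import socket

HOST, PORT = "192.168.122.101", 2049  # assumption, not taken from the log

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print("connect ok")
except OSError as exc:
    print("connect failed:", exc)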
Jan 23 10:21:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:21:59.775 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:21:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:21:59.776 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:21:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:21:59.776 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:21:59 compute-0 ceph-mon[74335]: pgmap v861: 353 pgs: 353 active+clean; 108 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 23 10:21:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:59] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Jan 23 10:21:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:21:59] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Jan 23 10:22:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:22:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:00.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:22:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 108 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 23 10:22:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:00.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:00 compute-0 nova_compute[249229]: 2026-01-23 10:22:00.891 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:01 compute-0 ceph-mon[74335]: pgmap v862: 353 pgs: 353 active+clean; 108 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 23 10:22:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:02.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 23 10:22:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:02.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:02 compute-0 nova_compute[249229]: 2026-01-23 10:22:02.943 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:03.659Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:22:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:03 compute-0 nova_compute[249229]: 2026-01-23 10:22:03.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:22:03 compute-0 nova_compute[249229]: 2026-01-23 10:22:03.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:22:03 compute-0 nova_compute[249229]: 2026-01-23 10:22:03.743 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:22:03 compute-0 nova_compute[249229]: 2026-01-23 10:22:03.744 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:22:03 compute-0 nova_compute[249229]: 2026-01-23 10:22:03.744 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:22:03 compute-0 nova_compute[249229]: 2026-01-23 10:22:03.744 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:22:03 compute-0 nova_compute[249229]: 2026-01-23 10:22:03.744 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:22:03 compute-0 ceph-mon[74335]: pgmap v863: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 23 10:22:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:04.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:22:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1253421413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.167 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
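[editor's note] The resource tracker lines above show nova shelling out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the RBD-backed disk pool. A minimal sketch of running the same command and reading cluster totals; the JSON keys ("stats", "total_bytes", "total_avail_bytes") are assumptions based on typical ceph df JSON output and may differ by release.

#!/usr/bin/env python3
# Sketch: run the same "ceph df" call the resource tracker logs above and
# report cluster capacity from its JSON output.
import json
import subprocess

cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
stats = json.loads(out).get("stats", {})

gib = 1024 ** 3
print("total: %.1f GiB, avail: %.1f GiB"
      % (stats.get("total_bytes", 0) / gib,
         stats.get("total_avail_bytes", 0) / gib))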
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.340 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.342 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4552MB free_disk=59.94289016723633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.342 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.343 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.435 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.436 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.455 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing inventories for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.486 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating ProviderTree inventory for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.486 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating inventory in ProviderTree for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.499 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing aggregate associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.520 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing trait associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 10:22:04 compute-0 nova_compute[249229]: 2026-01-23 10:22:04.547 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:22:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 23 10:22:04 compute-0 sudo[263753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:22:04 compute-0 sudo[263753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:04 compute-0 sudo[263753]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1253421413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:22:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:04.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:22:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615457187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:22:05 compute-0 nova_compute[249229]: 2026-01-23 10:22:05.025 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:22:05 compute-0 nova_compute[249229]: 2026-01-23 10:22:05.031 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:22:05 compute-0 nova_compute[249229]: 2026-01-23 10:22:05.048 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
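[editor's note] The inventory dict logged above pairs raw totals with reserved amounts and allocation ratios. A minimal sketch of the effective capacity those numbers imply, using the usual placement-style formula (total - reserved) * allocation_ratio; the values are copied from the log lines, and the formula is an approximation of what the scheduler sees, not a reimplementation of placement.

#!/usr/bin/env python3
# Sketch: effective capacity implied by the inventory logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# Expected output: VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2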
Jan 23 10:22:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:22:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:22:05 compute-0 nova_compute[249229]: 2026-01-23 10:22:05.052 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:22:05 compute-0 nova_compute[249229]: 2026-01-23 10:22:05.053 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:22:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:05 compute-0 ceph-mon[74335]: pgmap v864: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 23 10:22:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1615457187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:22:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:22:05 compute-0 nova_compute[249229]: 2026-01-23 10:22:05.893 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:06.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 23 10:22:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:06.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:07 compute-0 nova_compute[249229]: 2026-01-23 10:22:07.054 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:22:07 compute-0 nova_compute[249229]: 2026-01-23 10:22:07.054 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:22:07 compute-0 nova_compute[249229]: 2026-01-23 10:22:07.054 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:22:07 compute-0 nova_compute[249229]: 2026-01-23 10:22:07.069 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:22:07 compute-0 nova_compute[249229]: 2026-01-23 10:22:07.070 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:22:07 compute-0 nova_compute[249229]: 2026-01-23 10:22:07.070 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:22:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe624002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:07 compute-0 nova_compute[249229]: 2026-01-23 10:22:07.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:22:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:22:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:07.773Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:22:07 compute-0 ceph-mon[74335]: pgmap v865: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 23 10:22:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2351850048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:22:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/748902986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:22:07 compute-0 nova_compute[249229]: 2026-01-23 10:22:07.947 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:08.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Jan 23 10:22:08 compute-0 nova_compute[249229]: 2026-01-23 10:22:08.709 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:22:08 compute-0 nova_compute[249229]: 2026-01-23 10:22:08.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:22:08 compute-0 nova_compute[249229]: 2026-01-23 10:22:08.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:22:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:08.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1160861138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:22:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1277093325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:22:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe624002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:09 compute-0 nova_compute[249229]: 2026-01-23 10:22:09.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:22:09 compute-0 ceph-mon[74335]: pgmap v866: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Jan 23 10:22:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:09] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Jan 23 10:22:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:09] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Jan 23 10:22:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:10.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 339 KiB/s wr, 30 op/s
Jan 23 10:22:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:10 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:22:10 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:10 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:22:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:10.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:10 compute-0 nova_compute[249229]: 2026-01-23 10:22:10.895 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:11 compute-0 ceph-mon[74335]: pgmap v867: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 339 KiB/s wr, 30 op/s
Jan 23 10:22:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe624003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:12.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 342 KiB/s wr, 32 op/s
Jan 23 10:22:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:12.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:12 compute-0 nova_compute[249229]: 2026-01-23 10:22:12.950 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:13.660Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:22:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe624003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:22:13 compute-0 ceph-mon[74335]: pgmap v868: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 342 KiB/s wr, 32 op/s
Jan 23 10:22:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:14.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 15 KiB/s wr, 3 op/s
Jan 23 10:22:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:14.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:15 compute-0 ceph-mon[74335]: pgmap v869: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 15 KiB/s wr, 3 op/s
Jan 23 10:22:15 compute-0 nova_compute[249229]: 2026-01-23 10:22:15.897 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:16.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 16 KiB/s wr, 4 op/s
Jan 23 10:22:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:16.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe624003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:17.774Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:22:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:17.774Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:22:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:17.774Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:22:17 compute-0 ceph-mon[74335]: pgmap v870: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 16 KiB/s wr, 4 op/s
Jan 23 10:22:17 compute-0 nova_compute[249229]: 2026-01-23 10:22:17.952 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:18.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:18 compute-0 podman[263812]: 2026-01-23 10:22:18.538609012 +0000 UTC m=+0.074128501 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 10:22:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 3.6 KiB/s wr, 3 op/s
Jan 23 10:22:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:18.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/102219 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:22:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe624003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:19 compute-0 ceph-mon[74335]: pgmap v871: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 3.6 KiB/s wr, 3 op/s
Jan 23 10:22:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:19] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Jan 23 10:22:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:19] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:22:20
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.meta', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', '.nfs']
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:22:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:22:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:22:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:20.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007594002327928631 of space, bias 1.0, pg target 0.22782006983785894 quantized to 32 (current 32)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:22:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 3.2 KiB/s wr, 3 op/s
Jan 23 10:22:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:22:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:20.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:20 compute-0 nova_compute[249229]: 2026-01-23 10:22:20.899 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:21 compute-0 ceph-mon[74335]: pgmap v872: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 3.2 KiB/s wr, 3 op/s
Jan 23 10:22:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:22.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 6.6 KiB/s wr, 3 op/s
Jan 23 10:22:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:22.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:22 compute-0 nova_compute[249229]: 2026-01-23 10:22:22.953 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:23.661Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:22:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:23 compute-0 ceph-mon[74335]: pgmap v873: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 6.6 KiB/s wr, 3 op/s
Jan 23 10:22:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1943420009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:22:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:24.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:24 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:22:24.263 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:22:24 compute-0 nova_compute[249229]: 2026-01-23 10:22:24.263 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:24 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:22:24.264 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:22:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 1 op/s
Jan 23 10:22:24 compute-0 sudo[263847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:22:24 compute-0 sudo[263847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:24 compute-0 sudo[263847]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:24.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:25 compute-0 ceph-mon[74335]: pgmap v874: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 1 op/s
Jan 23 10:22:25 compute-0 nova_compute[249229]: 2026-01-23 10:22:25.901 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:26.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 144 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 246 KiB/s wr, 27 op/s
Jan 23 10:22:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:26.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:27 compute-0 ceph-mon[74335]: pgmap v875: 353 pgs: 353 active+clean; 144 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 246 KiB/s wr, 27 op/s
Jan 23 10:22:27 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:22:27.266 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:22:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:27 compute-0 podman[263876]: 2026-01-23 10:22:27.540391624 +0000 UTC m=+0.066265006 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 10:22:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:27.776Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:22:27 compute-0 nova_compute[249229]: 2026-01-23 10:22:27.957 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:28.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/815013215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:22:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2638325462' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:22:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:22:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:28.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:29 compute-0 ceph-mon[74335]: pgmap v876: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:22:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003e20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:29] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Jan 23 10:22:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:29] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Jan 23 10:22:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:30.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:22:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:30.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:30 compute-0 nova_compute[249229]: 2026-01-23 10:22:30.902 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:31 compute-0 ceph-mon[74335]: pgmap v877: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:22:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:32.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 23 10:22:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:32.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:32 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 23 10:22:33 compute-0 nova_compute[249229]: 2026-01-23 10:22:33.002 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:33.663Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:22:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:33 compute-0 ceph-mon[74335]: pgmap v878: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 23 10:22:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:34.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:22:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:34.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:22:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:22:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:35 compute-0 ceph-mon[74335]: pgmap v879: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:22:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:22:35 compute-0 nova_compute[249229]: 2026-01-23 10:22:35.904 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:36.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 186 op/s
Jan 23 10:22:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:36.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:37.777Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:22:37 compute-0 ceph-mon[74335]: pgmap v880: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 186 op/s
Jan 23 10:22:38 compute-0 nova_compute[249229]: 2026-01-23 10:22:38.004 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:22:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:38.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:22:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 207 op/s
Jan 23 10:22:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:38.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:39 compute-0 ceph-mon[74335]: pgmap v881: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 207 op/s
Jan 23 10:22:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:39] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:22:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:39] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:22:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:40.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 206 op/s
Jan 23 10:22:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:40.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:40 compute-0 nova_compute[249229]: 2026-01-23 10:22:40.906 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:41 compute-0 ceph-mon[74335]: pgmap v882: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 206 op/s
Jan 23 10:22:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:42.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 175 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 286 op/s
Jan 23 10:22:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:42.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:43 compute-0 nova_compute[249229]: 2026-01-23 10:22:43.052 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:43.665Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:22:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:43 compute-0 ceph-mon[74335]: pgmap v883: 353 pgs: 353 active+clean; 175 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 286 op/s
Jan 23 10:22:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:44.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:44 compute-0 sudo[263914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:22:44 compute-0 sudo[263914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:44 compute-0 sudo[263914]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:44 compute-0 sudo[263939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:22:44 compute-0 sudo[263939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 175 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 1.1 MiB/s wr, 211 op/s
Jan 23 10:22:44 compute-0 sudo[263980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:22:44 compute-0 sudo[263980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:44 compute-0 sudo[263980]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:45 compute-0 sudo[263939]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:22:45 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:22:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:22:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:22:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:22:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:22:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:22:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:22:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:22:45 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:22:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:22:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:22:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:22:45 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:22:45 compute-0 sudo[264022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:22:45 compute-0 sudo[264022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:45 compute-0 sudo[264022]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:45 compute-0 sudo[264047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:22:45 compute-0 sudo[264047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:45 compute-0 podman[264112]: 2026-01-23 10:22:45.778228953 +0000 UTC m=+0.041470517 container create fb30f4dbddbf1f5b0b6a61cdb34792e59615569673ea3cbd3bc519261ead6bb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mahavira, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:22:45 compute-0 systemd[1]: Started libpod-conmon-fb30f4dbddbf1f5b0b6a61cdb34792e59615569673ea3cbd3bc519261ead6bb2.scope.
Jan 23 10:22:45 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:22:45 compute-0 ceph-mon[74335]: pgmap v884: 353 pgs: 353 active+clean; 175 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 1.1 MiB/s wr, 211 op/s
Jan 23 10:22:45 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:22:45 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:22:45 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:22:45 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:22:45 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:22:45 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:22:45 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:22:45 compute-0 podman[264112]: 2026-01-23 10:22:45.853737264 +0000 UTC m=+0.116978858 container init fb30f4dbddbf1f5b0b6a61cdb34792e59615569673ea3cbd3bc519261ead6bb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mahavira, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 10:22:45 compute-0 podman[264112]: 2026-01-23 10:22:45.759188295 +0000 UTC m=+0.022429889 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:22:45 compute-0 podman[264112]: 2026-01-23 10:22:45.861840226 +0000 UTC m=+0.125081820 container start fb30f4dbddbf1f5b0b6a61cdb34792e59615569673ea3cbd3bc519261ead6bb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:22:45 compute-0 podman[264112]: 2026-01-23 10:22:45.866233987 +0000 UTC m=+0.129475571 container attach fb30f4dbddbf1f5b0b6a61cdb34792e59615569673ea3cbd3bc519261ead6bb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:22:45 compute-0 focused_mahavira[264128]: 167 167
Jan 23 10:22:45 compute-0 systemd[1]: libpod-fb30f4dbddbf1f5b0b6a61cdb34792e59615569673ea3cbd3bc519261ead6bb2.scope: Deactivated successfully.
Jan 23 10:22:45 compute-0 podman[264112]: 2026-01-23 10:22:45.869418912 +0000 UTC m=+0.132660466 container died fb30f4dbddbf1f5b0b6a61cdb34792e59615569673ea3cbd3bc519261ead6bb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mahavira, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a50ed7b20aae20b39b55eaee12ceec246848ebccb112a804e73b6e2217fa9ed-merged.mount: Deactivated successfully.
Jan 23 10:22:45 compute-0 nova_compute[249229]: 2026-01-23 10:22:45.908 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:45 compute-0 podman[264112]: 2026-01-23 10:22:45.912804195 +0000 UTC m=+0.176045759 container remove fb30f4dbddbf1f5b0b6a61cdb34792e59615569673ea3cbd3bc519261ead6bb2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mahavira, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:22:45 compute-0 systemd[1]: libpod-conmon-fb30f4dbddbf1f5b0b6a61cdb34792e59615569673ea3cbd3bc519261ead6bb2.scope: Deactivated successfully.
Jan 23 10:22:46 compute-0 podman[264152]: 2026-01-23 10:22:46.06794168 +0000 UTC m=+0.042181278 container create ed8fcc2fab47d614c8f11a313794986fadeded0f72ea9e0c1e6f32bad2cd8c5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_snyder, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:22:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:46.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:46 compute-0 systemd[1]: Started libpod-conmon-ed8fcc2fab47d614c8f11a313794986fadeded0f72ea9e0c1e6f32bad2cd8c5d.scope.
Jan 23 10:22:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1035203eb51ba2abc1099e537d73ecfac325e570a0e53c57f10096958a00f63d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1035203eb51ba2abc1099e537d73ecfac325e570a0e53c57f10096958a00f63d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1035203eb51ba2abc1099e537d73ecfac325e570a0e53c57f10096958a00f63d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1035203eb51ba2abc1099e537d73ecfac325e570a0e53c57f10096958a00f63d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1035203eb51ba2abc1099e537d73ecfac325e570a0e53c57f10096958a00f63d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:46 compute-0 podman[264152]: 2026-01-23 10:22:46.050608614 +0000 UTC m=+0.024848222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:22:46 compute-0 podman[264152]: 2026-01-23 10:22:46.155035537 +0000 UTC m=+0.129275205 container init ed8fcc2fab47d614c8f11a313794986fadeded0f72ea9e0c1e6f32bad2cd8c5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_snyder, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:22:46 compute-0 podman[264152]: 2026-01-23 10:22:46.162719696 +0000 UTC m=+0.136959294 container start ed8fcc2fab47d614c8f11a313794986fadeded0f72ea9e0c1e6f32bad2cd8c5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 10:22:46 compute-0 podman[264152]: 2026-01-23 10:22:46.166103117 +0000 UTC m=+0.140342785 container attach ed8fcc2fab47d614c8f11a313794986fadeded0f72ea9e0c1e6f32bad2cd8c5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_snyder, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 10:22:46 compute-0 relaxed_snyder[264169]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:22:46 compute-0 relaxed_snyder[264169]: --> All data devices are unavailable
Jan 23 10:22:46 compute-0 systemd[1]: libpod-ed8fcc2fab47d614c8f11a313794986fadeded0f72ea9e0c1e6f32bad2cd8c5d.scope: Deactivated successfully.
Jan 23 10:22:46 compute-0 podman[264152]: 2026-01-23 10:22:46.506490665 +0000 UTC m=+0.480730253 container died ed8fcc2fab47d614c8f11a313794986fadeded0f72ea9e0c1e6f32bad2cd8c5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1035203eb51ba2abc1099e537d73ecfac325e570a0e53c57f10096958a00f63d-merged.mount: Deactivated successfully.
Jan 23 10:22:46 compute-0 podman[264152]: 2026-01-23 10:22:46.549727984 +0000 UTC m=+0.523967572 container remove ed8fcc2fab47d614c8f11a313794986fadeded0f72ea9e0c1e6f32bad2cd8c5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:22:46 compute-0 systemd[1]: libpod-conmon-ed8fcc2fab47d614c8f11a313794986fadeded0f72ea9e0c1e6f32bad2cd8c5d.scope: Deactivated successfully.
Jan 23 10:22:46 compute-0 sudo[264047]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:46 compute-0 sudo[264197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:22:46 compute-0 sudo[264197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:46 compute-0 sudo[264197]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 199 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 460 KiB/s rd, 2.1 MiB/s wr, 243 op/s
Jan 23 10:22:46 compute-0 sudo[264222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:22:46 compute-0 sudo[264222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:46.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:47 compute-0 podman[264289]: 2026-01-23 10:22:47.111932475 +0000 UTC m=+0.038883560 container create 25945abdc57e73ce1fdcacb7d4b6fdf83e410a9464058bb756eddcb704fd3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 23 10:22:47 compute-0 systemd[1]: Started libpod-conmon-25945abdc57e73ce1fdcacb7d4b6fdf83e410a9464058bb756eddcb704fd3442.scope.
Jan 23 10:22:47 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:22:47 compute-0 podman[264289]: 2026-01-23 10:22:47.094512706 +0000 UTC m=+0.021463811 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:22:47 compute-0 podman[264289]: 2026-01-23 10:22:47.201939899 +0000 UTC m=+0.128890974 container init 25945abdc57e73ce1fdcacb7d4b6fdf83e410a9464058bb756eddcb704fd3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:22:47 compute-0 podman[264289]: 2026-01-23 10:22:47.208862455 +0000 UTC m=+0.135813570 container start 25945abdc57e73ce1fdcacb7d4b6fdf83e410a9464058bb756eddcb704fd3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatterjee, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:22:47 compute-0 podman[264289]: 2026-01-23 10:22:47.212967968 +0000 UTC m=+0.139919043 container attach 25945abdc57e73ce1fdcacb7d4b6fdf83e410a9464058bb756eddcb704fd3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:22:47 compute-0 jovial_chatterjee[264305]: 167 167
Jan 23 10:22:47 compute-0 podman[264289]: 2026-01-23 10:22:47.216187484 +0000 UTC m=+0.143138559 container died 25945abdc57e73ce1fdcacb7d4b6fdf83e410a9464058bb756eddcb704fd3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 10:22:47 compute-0 systemd[1]: libpod-25945abdc57e73ce1fdcacb7d4b6fdf83e410a9464058bb756eddcb704fd3442.scope: Deactivated successfully.
Jan 23 10:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd02773dcd1881e18a467bda7a3629c56ee4cf8098523ad7557d87a79bb071ba-merged.mount: Deactivated successfully.
Jan 23 10:22:47 compute-0 podman[264289]: 2026-01-23 10:22:47.249017532 +0000 UTC m=+0.175968647 container remove 25945abdc57e73ce1fdcacb7d4b6fdf83e410a9464058bb756eddcb704fd3442 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_chatterjee, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 23 10:22:47 compute-0 systemd[1]: libpod-conmon-25945abdc57e73ce1fdcacb7d4b6fdf83e410a9464058bb756eddcb704fd3442.scope: Deactivated successfully.
Jan 23 10:22:47 compute-0 podman[264329]: 2026-01-23 10:22:47.410005252 +0000 UTC m=+0.037356685 container create 5dff1133cf3067395cbec082ad6ed7ea4dde7b5eedf79d4bfa8de35b0a04015a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:22:47 compute-0 systemd[1]: Started libpod-conmon-5dff1133cf3067395cbec082ad6ed7ea4dde7b5eedf79d4bfa8de35b0a04015a.scope.
Jan 23 10:22:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:47 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106ef93117072d15d805f9fdc6e62367b0097242d4f51d6bda0579ce4c454e66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106ef93117072d15d805f9fdc6e62367b0097242d4f51d6bda0579ce4c454e66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106ef93117072d15d805f9fdc6e62367b0097242d4f51d6bda0579ce4c454e66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106ef93117072d15d805f9fdc6e62367b0097242d4f51d6bda0579ce4c454e66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:47 compute-0 podman[264329]: 2026-01-23 10:22:47.395030045 +0000 UTC m=+0.022381498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:22:47 compute-0 podman[264329]: 2026-01-23 10:22:47.491281925 +0000 UTC m=+0.118633388 container init 5dff1133cf3067395cbec082ad6ed7ea4dde7b5eedf79d4bfa8de35b0a04015a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 23 10:22:47 compute-0 podman[264329]: 2026-01-23 10:22:47.49815437 +0000 UTC m=+0.125505803 container start 5dff1133cf3067395cbec082ad6ed7ea4dde7b5eedf79d4bfa8de35b0a04015a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 10:22:47 compute-0 podman[264329]: 2026-01-23 10:22:47.501792208 +0000 UTC m=+0.129143651 container attach 5dff1133cf3067395cbec082ad6ed7ea4dde7b5eedf79d4bfa8de35b0a04015a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 10:22:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:47.778Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:22:47 compute-0 dazzling_cray[264346]: {
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:     "1": [
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:         {
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             "devices": [
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "/dev/loop3"
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             ],
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             "lv_name": "ceph_lv0",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             "lv_size": "21470642176",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             "name": "ceph_lv0",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             "tags": {
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.cluster_name": "ceph",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.crush_device_class": "",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.encrypted": "0",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.osd_id": "1",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.type": "block",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.vdo": "0",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:                 "ceph.with_tpm": "0"
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             },
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             "type": "block",
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:             "vg_name": "ceph_vg0"
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:         }
Jan 23 10:22:47 compute-0 dazzling_cray[264346]:     ]
Jan 23 10:22:47 compute-0 dazzling_cray[264346]: }
Jan 23 10:22:47 compute-0 systemd[1]: libpod-5dff1133cf3067395cbec082ad6ed7ea4dde7b5eedf79d4bfa8de35b0a04015a.scope: Deactivated successfully.
Jan 23 10:22:47 compute-0 podman[264329]: 2026-01-23 10:22:47.807472032 +0000 UTC m=+0.434823485 container died 5dff1133cf3067395cbec082ad6ed7ea4dde7b5eedf79d4bfa8de35b0a04015a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 10:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-106ef93117072d15d805f9fdc6e62367b0097242d4f51d6bda0579ce4c454e66-merged.mount: Deactivated successfully.
Jan 23 10:22:47 compute-0 podman[264329]: 2026-01-23 10:22:47.844071413 +0000 UTC m=+0.471422846 container remove 5dff1133cf3067395cbec082ad6ed7ea4dde7b5eedf79d4bfa8de35b0a04015a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 10:22:47 compute-0 systemd[1]: libpod-conmon-5dff1133cf3067395cbec082ad6ed7ea4dde7b5eedf79d4bfa8de35b0a04015a.scope: Deactivated successfully.
Jan 23 10:22:47 compute-0 sudo[264222]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:47 compute-0 ceph-mon[74335]: pgmap v885: 353 pgs: 353 active+clean; 199 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 460 KiB/s rd, 2.1 MiB/s wr, 243 op/s
Jan 23 10:22:47 compute-0 sudo[264367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:22:47 compute-0 sudo[264367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:47 compute-0 sudo[264367]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:48 compute-0 sudo[264392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:22:48 compute-0 sudo[264392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:48 compute-0 nova_compute[249229]: 2026-01-23 10:22:48.054 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:48.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:48 compute-0 podman[264457]: 2026-01-23 10:22:48.402400908 +0000 UTC m=+0.034911922 container create 294ba31f5dd85fabb84e39fb35a7c572cde4dae7c5958fd630e3361365249a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 10:22:48 compute-0 systemd[1]: Started libpod-conmon-294ba31f5dd85fabb84e39fb35a7c572cde4dae7c5958fd630e3361365249a47.scope.
Jan 23 10:22:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:22:48 compute-0 podman[264457]: 2026-01-23 10:22:48.460858471 +0000 UTC m=+0.093369485 container init 294ba31f5dd85fabb84e39fb35a7c572cde4dae7c5958fd630e3361365249a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sinoussi, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:22:48 compute-0 podman[264457]: 2026-01-23 10:22:48.471267091 +0000 UTC m=+0.103778105 container start 294ba31f5dd85fabb84e39fb35a7c572cde4dae7c5958fd630e3361365249a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sinoussi, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:22:48 compute-0 quizzical_sinoussi[264474]: 167 167
Jan 23 10:22:48 compute-0 systemd[1]: libpod-294ba31f5dd85fabb84e39fb35a7c572cde4dae7c5958fd630e3361365249a47.scope: Deactivated successfully.
Jan 23 10:22:48 compute-0 podman[264457]: 2026-01-23 10:22:48.474791786 +0000 UTC m=+0.107302800 container attach 294ba31f5dd85fabb84e39fb35a7c572cde4dae7c5958fd630e3361365249a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 10:22:48 compute-0 podman[264457]: 2026-01-23 10:22:48.475083495 +0000 UTC m=+0.107594499 container died 294ba31f5dd85fabb84e39fb35a7c572cde4dae7c5958fd630e3361365249a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sinoussi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 10:22:48 compute-0 podman[264457]: 2026-01-23 10:22:48.388360649 +0000 UTC m=+0.020871683 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b12d7a548e646c96dbaac128d9dd6f27fbf85b6e0bb8b052e07ed049bd6df7ac-merged.mount: Deactivated successfully.
Jan 23 10:22:48 compute-0 podman[264457]: 2026-01-23 10:22:48.507911973 +0000 UTC m=+0.140422987 container remove 294ba31f5dd85fabb84e39fb35a7c572cde4dae7c5958fd630e3361365249a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sinoussi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:22:48 compute-0 systemd[1]: libpod-conmon-294ba31f5dd85fabb84e39fb35a7c572cde4dae7c5958fd630e3361365249a47.scope: Deactivated successfully.
Jan 23 10:22:48 compute-0 podman[264499]: 2026-01-23 10:22:48.669329506 +0000 UTC m=+0.041122407 container create 019ab0b8870b2a912c19c1d288cfb33820404e7f0d14808845a93befa49c9c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:22:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 446 KiB/s rd, 2.1 MiB/s wr, 162 op/s
Jan 23 10:22:48 compute-0 systemd[1]: Started libpod-conmon-019ab0b8870b2a912c19c1d288cfb33820404e7f0d14808845a93befa49c9c1c.scope.
Jan 23 10:22:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8989bd9193c2296970f14c13f4d4474797616886b7bc0b531a78b08f5d03317a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8989bd9193c2296970f14c13f4d4474797616886b7bc0b531a78b08f5d03317a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8989bd9193c2296970f14c13f4d4474797616886b7bc0b531a78b08f5d03317a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8989bd9193c2296970f14c13f4d4474797616886b7bc0b531a78b08f5d03317a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:22:48 compute-0 podman[264499]: 2026-01-23 10:22:48.727790559 +0000 UTC m=+0.099583480 container init 019ab0b8870b2a912c19c1d288cfb33820404e7f0d14808845a93befa49c9c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_maxwell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:22:48 compute-0 podman[264499]: 2026-01-23 10:22:48.735786237 +0000 UTC m=+0.107579138 container start 019ab0b8870b2a912c19c1d288cfb33820404e7f0d14808845a93befa49c9c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 10:22:48 compute-0 podman[264499]: 2026-01-23 10:22:48.64967249 +0000 UTC m=+0.021465411 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:22:48 compute-0 podman[264499]: 2026-01-23 10:22:48.745839247 +0000 UTC m=+0.117632178 container attach 019ab0b8870b2a912c19c1d288cfb33820404e7f0d14808845a93befa49c9c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_maxwell, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:22:48 compute-0 podman[264513]: 2026-01-23 10:22:48.805437534 +0000 UTC m=+0.099342463 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 10:22:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2595657408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:22:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2595657408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:22:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:48.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:49 compute-0 lvm[264618]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:22:49 compute-0 lvm[264618]: VG ceph_vg0 finished
Jan 23 10:22:49 compute-0 laughing_maxwell[264516]: {}
Jan 23 10:22:49 compute-0 systemd[1]: libpod-019ab0b8870b2a912c19c1d288cfb33820404e7f0d14808845a93befa49c9c1c.scope: Deactivated successfully.
Jan 23 10:22:49 compute-0 systemd[1]: libpod-019ab0b8870b2a912c19c1d288cfb33820404e7f0d14808845a93befa49c9c1c.scope: Consumed 1.110s CPU time.
Jan 23 10:22:49 compute-0 podman[264499]: 2026-01-23 10:22:49.453313649 +0000 UTC m=+0.825106550 container died 019ab0b8870b2a912c19c1d288cfb33820404e7f0d14808845a93befa49c9c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:22:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8989bd9193c2296970f14c13f4d4474797616886b7bc0b531a78b08f5d03317a-merged.mount: Deactivated successfully.
Jan 23 10:22:49 compute-0 podman[264499]: 2026-01-23 10:22:49.494178697 +0000 UTC m=+0.865971598 container remove 019ab0b8870b2a912c19c1d288cfb33820404e7f0d14808845a93befa49c9c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 10:22:49 compute-0 systemd[1]: libpod-conmon-019ab0b8870b2a912c19c1d288cfb33820404e7f0d14808845a93befa49c9c1c.scope: Deactivated successfully.
Jan 23 10:22:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:49 compute-0 sudo[264392]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:22:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:22:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:22:49 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:22:49 compute-0 sudo[264632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:22:49 compute-0 sudo[264632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:22:49 compute-0 sudo[264632]: pam_unix(sudo:session): session closed for user root
Jan 23 10:22:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:49 compute-0 ceph-mon[74335]: pgmap v886: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 446 KiB/s rd, 2.1 MiB/s wr, 162 op/s
Jan 23 10:22:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:22:49 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:22:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:49] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:22:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:49] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:22:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:22:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:22:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:50.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:22:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:22:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:22:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:22:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:22:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:22:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 417 KiB/s rd, 2.1 MiB/s wr, 114 op/s
Jan 23 10:22:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:50.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:50 compute-0 nova_compute[249229]: 2026-01-23 10:22:50.910 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:22:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/102251 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:22:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:51 compute-0 ceph-mon[74335]: pgmap v887: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 417 KiB/s rd, 2.1 MiB/s wr, 114 op/s
Jan 23 10:22:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:52.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 417 KiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 23 10:22:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:22:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:52.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:22:52 compute-0 ceph-mon[74335]: pgmap v888: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 417 KiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 23 10:22:53 compute-0 nova_compute[249229]: 2026-01-23 10:22:53.055 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:53.666Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:22:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:53.666Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:22:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:54.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 1.0 MiB/s wr, 35 op/s
Jan 23 10:22:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:54.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:55 compute-0 ceph-mon[74335]: pgmap v889: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 1.0 MiB/s wr, 35 op/s
Jan 23 10:22:55 compute-0 nova_compute[249229]: 2026-01-23 10:22:55.911 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:56.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 1.0 MiB/s wr, 35 op/s
Jan 23 10:22:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:22:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:22:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:56.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:22:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:57 compute-0 ceph-mon[74335]: pgmap v890: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 1.0 MiB/s wr, 35 op/s
Jan 23 10:22:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:22:57.778Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:22:58 compute-0 nova_compute[249229]: 2026-01-23 10:22:58.058 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:22:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:22:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:58.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:22:58 compute-0 podman[264670]: 2026-01-23 10:22:58.522861503 +0000 UTC m=+0.052289480 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 10:22:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 173 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 18 KiB/s wr, 6 op/s
Jan 23 10:22:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:22:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:22:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:58.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:22:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644001220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:22:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:22:59.776 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:22:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:22:59.776 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:22:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:22:59.776 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:22:59 compute-0 ceph-mon[74335]: pgmap v891: 353 pgs: 353 active+clean; 173 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 18 KiB/s wr, 6 op/s
Jan 23 10:22:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:22:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:22:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:59] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:22:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:22:59] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:23:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:00.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 173 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 17 KiB/s wr, 3 op/s
Jan 23 10:23:00 compute-0 nova_compute[249229]: 2026-01-23 10:23:00.912 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:23:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:00.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:23:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6440013c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:01 compute-0 ceph-mon[74335]: pgmap v892: 353 pgs: 353 active+clean; 173 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 17 KiB/s wr, 3 op/s
Jan 23 10:23:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2480691247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:02.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 20 KiB/s wr, 30 op/s
Jan 23 10:23:02 compute-0 nova_compute[249229]: 2026-01-23 10:23:02.709 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:23:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:23:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:02.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:23:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:02 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:23:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:02 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:23:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:02 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:23:03 compute-0 nova_compute[249229]: 2026-01-23 10:23:03.061 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:03.667Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:23:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:03 compute-0 nova_compute[249229]: 2026-01-23 10:23:03.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:23:03 compute-0 ceph-mon[74335]: pgmap v893: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 20 KiB/s wr, 30 op/s
Jan 23 10:23:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:04.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 8.4 KiB/s wr, 30 op/s
Jan 23 10:23:04 compute-0 sudo[264697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:23:04 compute-0 sudo[264697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:04 compute-0 sudo[264697]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:04.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:23:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:23:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644002620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6340023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:05 compute-0 nova_compute[249229]: 2026-01-23 10:23:05.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:23:05 compute-0 nova_compute[249229]: 2026-01-23 10:23:05.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:23:05 compute-0 nova_compute[249229]: 2026-01-23 10:23:05.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:23:05 compute-0 nova_compute[249229]: 2026-01-23 10:23:05.741 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:05 compute-0 nova_compute[249229]: 2026-01-23 10:23:05.742 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:05 compute-0 nova_compute[249229]: 2026-01-23 10:23:05.742 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:05 compute-0 nova_compute[249229]: 2026-01-23 10:23:05.742 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:23:05 compute-0 nova_compute[249229]: 2026-01-23 10:23:05.742 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:23:05 compute-0 ceph-mon[74335]: pgmap v894: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 8.4 KiB/s wr, 30 op/s
Jan 23 10:23:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:23:05 compute-0 nova_compute[249229]: 2026-01-23 10:23:05.914 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:23:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:06.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:23:06 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596926985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:06 compute-0 nova_compute[249229]: 2026-01-23 10:23:06.208 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:23:06 compute-0 nova_compute[249229]: 2026-01-23 10:23:06.356 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:23:06 compute-0 nova_compute[249229]: 2026-01-23 10:23:06.357 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4568MB free_disk=59.942562103271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:23:06 compute-0 nova_compute[249229]: 2026-01-23 10:23:06.357 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:06 compute-0 nova_compute[249229]: 2026-01-23 10:23:06.357 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:06 compute-0 nova_compute[249229]: 2026-01-23 10:23:06.424 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:23:06 compute-0 nova_compute[249229]: 2026-01-23 10:23:06.424 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:23:06 compute-0 nova_compute[249229]: 2026-01-23 10:23:06.441 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:23:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 8.8 KiB/s wr, 32 op/s
Jan 23 10:23:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:23:06 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002394447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:06 compute-0 nova_compute[249229]: 2026-01-23 10:23:06.877 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:23:06 compute-0 nova_compute[249229]: 2026-01-23 10:23:06.882 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:23:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:06.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1596926985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:07 compute-0 nova_compute[249229]: 2026-01-23 10:23:07.369 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:23:07 compute-0 nova_compute[249229]: 2026-01-23 10:23:07.370 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:23:07 compute-0 nova_compute[249229]: 2026-01-23 10:23:07.370 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6340023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:07.779Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:23:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:07.779Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:23:08 compute-0 nova_compute[249229]: 2026-01-23 10:23:08.064 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:08 compute-0 ceph-mon[74335]: pgmap v895: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 8.8 KiB/s wr, 32 op/s
Jan 23 10:23:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2002394447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:23:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:08.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:23:08 compute-0 nova_compute[249229]: 2026-01-23 10:23:08.371 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:23:08 compute-0 nova_compute[249229]: 2026-01-23 10:23:08.371 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:23:08 compute-0 nova_compute[249229]: 2026-01-23 10:23:08.371 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:23:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 31 op/s
Jan 23 10:23:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:23:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:08.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:23:09 compute-0 nova_compute[249229]: 2026-01-23 10:23:09.077 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:23:09 compute-0 nova_compute[249229]: 2026-01-23 10:23:09.077 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:23:09 compute-0 ceph-mon[74335]: pgmap v896: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 31 op/s
Jan 23 10:23:09 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/644210232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644002620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:09 compute-0 nova_compute[249229]: 2026-01-23 10:23:09.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:23:09 compute-0 nova_compute[249229]: 2026-01-23 10:23:09.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:23:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:09] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:23:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:09] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:23:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:10.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1432565687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1488791737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Jan 23 10:23:10 compute-0 nova_compute[249229]: 2026-01-23 10:23:10.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:23:10 compute-0 nova_compute[249229]: 2026-01-23 10:23:10.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:23:10 compute-0 nova_compute[249229]: 2026-01-23 10:23:10.916 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:10.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:11 compute-0 ceph-mon[74335]: pgmap v897: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Jan 23 10:23:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/102311 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:23:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:12.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/721921800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 80 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 5.2 KiB/s wr, 35 op/s
Jan 23 10:23:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:12.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:13 compute-0 nova_compute[249229]: 2026-01-23 10:23:13.067 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:13 compute-0 ceph-mon[74335]: pgmap v898: 353 pgs: 353 active+clean; 80 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 5.2 KiB/s wr, 35 op/s
Jan 23 10:23:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:13.669Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:23:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:13.670Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:23:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:14.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 80 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 2.4 KiB/s wr, 7 op/s
Jan 23 10:23:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1573239966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:14.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:15 compute-0 ceph-mon[74335]: pgmap v899: 353 pgs: 353 active+clean; 80 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 2.4 KiB/s wr, 7 op/s
Jan 23 10:23:15 compute-0 nova_compute[249229]: 2026-01-23 10:23:15.918 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:16.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Jan 23 10:23:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:16.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:17.780Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:23:17 compute-0 ceph-mon[74335]: pgmap v900: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Jan 23 10:23:18 compute-0 nova_compute[249229]: 2026-01-23 10:23:18.070 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:18.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.8 KiB/s wr, 28 op/s
Jan 23 10:23:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:18.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:19 compute-0 ceph-mon[74335]: pgmap v901: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.8 KiB/s wr, 28 op/s
Jan 23 10:23:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:19 compute-0 podman[264780]: 2026-01-23 10:23:19.579409708 +0000 UTC m=+0.114032711 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 10:23:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:19] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:23:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:19] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:23:20
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.mgr', 'images', '.rgw.root', 'vms', 'default.rgw.log', '.nfs', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control']
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:23:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:23:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:23:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:23:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:20.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:23:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:23:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.8 KiB/s wr, 28 op/s
Jan 23 10:23:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:20.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:20 compute-0 nova_compute[249229]: 2026-01-23 10:23:20.954 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:21 compute-0 ceph-mon[74335]: pgmap v902: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.8 KiB/s wr, 28 op/s
Jan 23 10:23:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:22.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.8 KiB/s wr, 28 op/s
Jan 23 10:23:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:22.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:23 compute-0 nova_compute[249229]: 2026-01-23 10:23:23.071 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:23.672Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:23:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:23 compute-0 ceph-mon[74335]: pgmap v903: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.8 KiB/s wr, 28 op/s
Jan 23 10:23:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:24.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Jan 23 10:23:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:23:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:24.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:23:25 compute-0 sudo[264813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:23:25 compute-0 sudo[264813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:25 compute-0 sudo[264813]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:25 compute-0 ceph-mon[74335]: pgmap v904: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Jan 23 10:23:25 compute-0 nova_compute[249229]: 2026-01-23 10:23:25.956 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:26.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 853 B/s wr, 22 op/s
Jan 23 10:23:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:26.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:27.782Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:23:27 compute-0 ceph-mon[74335]: pgmap v905: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 853 B/s wr, 22 op/s
Jan 23 10:23:28 compute-0 nova_compute[249229]: 2026-01-23 10:23:28.073 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:28.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:23:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:28.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:29 compute-0 podman[264842]: 2026-01-23 10:23:29.524133391 +0000 UTC m=+0.051641281 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 10:23:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:29 compute-0 ceph-mon[74335]: pgmap v906: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:23:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:29] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Jan 23 10:23:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:29] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Jan 23 10:23:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:30.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:30.517 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:23:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:30.518 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:23:30 compute-0 nova_compute[249229]: 2026-01-23 10:23:30.519 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:23:30 compute-0 nova_compute[249229]: 2026-01-23 10:23:30.958 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:30.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:32 compute-0 ceph-mon[74335]: pgmap v907: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:23:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:32.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:23:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/102332 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:23:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:32.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:33 compute-0 nova_compute[249229]: 2026-01-23 10:23:33.074 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:33 compute-0 ceph-mon[74335]: pgmap v908: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:23:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:33.673Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:23:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:34.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:23:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:23:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:34.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:23:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:23:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:23:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:35 compute-0 ceph-mon[74335]: pgmap v909: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 23 10:23:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:23:35 compute-0 nova_compute[249229]: 2026-01-23 10:23:35.959 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:36.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:23:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:36.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:37 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:37.520 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:23:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:37.783Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:23:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:37.783Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:23:38 compute-0 nova_compute[249229]: 2026-01-23 10:23:38.076 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:23:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:38.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:23:38 compute-0 ceph-mon[74335]: pgmap v910: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 23 10:23:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:23:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:38.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:39 compute-0 ceph-mon[74335]: pgmap v911: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:23:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:39] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:23:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:39] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:23:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:40.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:23:40 compute-0 nova_compute[249229]: 2026-01-23 10:23:40.961 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:41 compute-0 nova_compute[249229]: 2026-01-23 10:23:41.437 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:41 compute-0 nova_compute[249229]: 2026-01-23 10:23:41.438 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:41 compute-0 nova_compute[249229]: 2026-01-23 10:23:41.454 249233 DEBUG nova.compute.manager [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 10:23:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:41 compute-0 nova_compute[249229]: 2026-01-23 10:23:41.528 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:41 compute-0 nova_compute[249229]: 2026-01-23 10:23:41.528 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:41 compute-0 nova_compute[249229]: 2026-01-23 10:23:41.535 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 10:23:41 compute-0 nova_compute[249229]: 2026-01-23 10:23:41.536 249233 INFO nova.compute.claims [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Claim successful on node compute-0.ctlplane.example.com
Jan 23 10:23:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:41 compute-0 nova_compute[249229]: 2026-01-23 10:23:41.647 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:23:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:41 compute-0 ceph-mon[74335]: pgmap v912: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:23:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.866579) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163821866711, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1491, "num_deletes": 505, "total_data_size": 2197667, "memory_usage": 2225232, "flush_reason": "Manual Compaction"}
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163821877589, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 1383209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27077, "largest_seqno": 28567, "table_properties": {"data_size": 1377972, "index_size": 2057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16001, "raw_average_key_size": 19, "raw_value_size": 1364717, "raw_average_value_size": 1630, "num_data_blocks": 90, "num_entries": 837, "num_filter_entries": 837, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163706, "oldest_key_time": 1769163706, "file_creation_time": 1769163821, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 11090 microseconds, and 4609 cpu microseconds.
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.877669) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 1383209 bytes OK
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.877705) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.879923) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.879945) EVENT_LOG_v1 {"time_micros": 1769163821879938, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.879968) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2190217, prev total WAL file size 2190217, number of live WAL files 2.
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.881188) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353130' seq:72057594037927935, type:22 .. '6C6F676D00373631' seq:0, type:0; will stop at (end)
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(1350KB)], [59(14MB)]
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163821881427, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 16235132, "oldest_snapshot_seqno": -1}
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5781 keys, 12761204 bytes, temperature: kUnknown
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163821963375, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 12761204, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12723841, "index_size": 21829, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 149206, "raw_average_key_size": 25, "raw_value_size": 12620390, "raw_average_value_size": 2183, "num_data_blocks": 877, "num_entries": 5781, "num_filter_entries": 5781, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769163821, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.963750) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 12761204 bytes
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.965402) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.8 rd, 155.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 14.2 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(21.0) write-amplify(9.2) OK, records in: 6759, records dropped: 978 output_compression: NoCompression
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.965423) EVENT_LOG_v1 {"time_micros": 1769163821965413, "job": 32, "event": "compaction_finished", "compaction_time_micros": 82093, "compaction_time_cpu_micros": 47321, "output_level": 6, "num_output_files": 1, "total_output_size": 12761204, "num_input_records": 6759, "num_output_records": 5781, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163821966051, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163821968995, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.881043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.969231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.969243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.969248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.969252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:23:41 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:23:41.969256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:23:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:23:42 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2132895342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.110 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.116 249233 DEBUG nova.compute.provider_tree [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.134 249233 DEBUG nova.scheduler.client.report [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.162 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.163 249233 DEBUG nova.compute.manager [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 10:23:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:42.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.208 249233 DEBUG nova.compute.manager [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.208 249233 DEBUG nova.network.neutron [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.231 249233 INFO nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.253 249233 DEBUG nova.compute.manager [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.352 249233 DEBUG nova.compute.manager [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.353 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.354 249233 INFO nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Creating image(s)
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.382 249233 DEBUG nova.storage.rbd_utils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.408 249233 DEBUG nova.storage.rbd_utils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.433 249233 DEBUG nova.storage.rbd_utils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.436 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.496 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.497 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "379b2821245bc82aa5a95839eddb9a97716b559c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.498 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "379b2821245bc82aa5a95839eddb9a97716b559c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.498 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "379b2821245bc82aa5a95839eddb9a97716b559c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.530 249233 DEBUG nova.storage.rbd_utils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.534 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:23:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:42 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:23:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.719 249233 DEBUG nova.policy [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f459c4e71e6c47acb0f8aaf83f34695e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.865 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:23:42 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2132895342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:42 compute-0 nova_compute[249229]: 2026-01-23 10:23:42.959 249233 DEBUG nova.storage.rbd_utils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] resizing rbd image 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 10:23:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:42.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.075 249233 DEBUG nova.objects.instance [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'migration_context' on Instance uuid 63ed4545-8ad4-406e-be3b-3aaafb68fbcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.078 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.092 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.093 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Ensure instance console log exists: /var/lib/nova/instances/63ed4545-8ad4-406e-be3b-3aaafb68fbcc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.093 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.094 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.094 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:43.674Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:23:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:43.675Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.709 249233 DEBUG nova.network.neutron [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Successfully updated port: d744a552-c706-444a-8a15-4a98c41eed50 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.724 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "refresh_cache-63ed4545-8ad4-406e-be3b-3aaafb68fbcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.724 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquired lock "refresh_cache-63ed4545-8ad4-406e-be3b-3aaafb68fbcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.724 249233 DEBUG nova.network.neutron [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 10:23:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.828 249233 DEBUG nova.compute.manager [req-631b04f3-1e24-4070-9157-a92c68c41626 req-deb1ef3e-5d1e-4430-9547-6fbc9475cd54 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Received event network-changed-d744a552-c706-444a-8a15-4a98c41eed50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.828 249233 DEBUG nova.compute.manager [req-631b04f3-1e24-4070-9157-a92c68c41626 req-deb1ef3e-5d1e-4430-9547-6fbc9475cd54 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Refreshing instance network info cache due to event network-changed-d744a552-c706-444a-8a15-4a98c41eed50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.828 249233 DEBUG oslo_concurrency.lockutils [req-631b04f3-1e24-4070-9157-a92c68c41626 req-deb1ef3e-5d1e-4430-9547-6fbc9475cd54 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "refresh_cache-63ed4545-8ad4-406e-be3b-3aaafb68fbcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:23:43 compute-0 ceph-mon[74335]: pgmap v913: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:23:43 compute-0 nova_compute[249229]: 2026-01-23 10:23:43.942 249233 DEBUG nova.network.neutron [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 10:23:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:44.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.746 249233 DEBUG nova.network.neutron [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Updating instance_info_cache with network_info: [{"id": "d744a552-c706-444a-8a15-4a98c41eed50", "address": "fa:16:3e:9f:48:6d", "network": {"id": "2fb57e44-e877-47c8-860b-b36d5b5ff599", "bridge": "br-int", "label": "tempest-network-smoke--2143346610", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd744a552-c7", "ovs_interfaceid": "d744a552-c706-444a-8a15-4a98c41eed50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.772 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Releasing lock "refresh_cache-63ed4545-8ad4-406e-be3b-3aaafb68fbcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.773 249233 DEBUG nova.compute.manager [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Instance network_info: |[{"id": "d744a552-c706-444a-8a15-4a98c41eed50", "address": "fa:16:3e:9f:48:6d", "network": {"id": "2fb57e44-e877-47c8-860b-b36d5b5ff599", "bridge": "br-int", "label": "tempest-network-smoke--2143346610", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd744a552-c7", "ovs_interfaceid": "d744a552-c706-444a-8a15-4a98c41eed50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.773 249233 DEBUG oslo_concurrency.lockutils [req-631b04f3-1e24-4070-9157-a92c68c41626 req-deb1ef3e-5d1e-4430-9547-6fbc9475cd54 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquired lock "refresh_cache-63ed4545-8ad4-406e-be3b-3aaafb68fbcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.773 249233 DEBUG nova.network.neutron [req-631b04f3-1e24-4070-9157-a92c68c41626 req-deb1ef3e-5d1e-4430-9547-6fbc9475cd54 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Refreshing network info cache for port d744a552-c706-444a-8a15-4a98c41eed50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.776 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Start _get_guest_xml network_info=[{"id": "d744a552-c706-444a-8a15-4a98c41eed50", "address": "fa:16:3e:9f:48:6d", "network": {"id": "2fb57e44-e877-47c8-860b-b36d5b5ff599", "bridge": "br-int", "label": "tempest-network-smoke--2143346610", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd744a552-c7", "ovs_interfaceid": "d744a552-c706-444a-8a15-4a98c41eed50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T10:15:36Z,direct_url=<?>,disk_format='qcow2',id=271ec98e-d058-421b-bbfb-4b4a5954c90a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5220cd4f58cb43bb899e367e961bc5c1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T10:15:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'size': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '271ec98e-d058-421b-bbfb-4b4a5954c90a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.782 249233 WARNING nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.788 249233 DEBUG nova.virt.libvirt.host [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.789 249233 DEBUG nova.virt.libvirt.host [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.793 249233 DEBUG nova.virt.libvirt.host [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.794 249233 DEBUG nova.virt.libvirt.host [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.794 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.794 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T10:15:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1d8c8bf4-786e-4009-bc53-f259480fb5b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T10:15:36Z,direct_url=<?>,disk_format='qcow2',id=271ec98e-d058-421b-bbfb-4b4a5954c90a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5220cd4f58cb43bb899e367e961bc5c1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T10:15:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.795 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.795 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.795 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.796 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.796 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.796 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.797 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.797 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.797 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.798 249233 DEBUG nova.virt.hardware [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
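
The nova.virt.hardware trace above shows the CPU topology choice for the 1-vCPU m1.nano flavor: no flavor or image limits (0:0:0), maxima of 65536 for sockets/cores/threads, and a single possible topology of 1:1:1. A minimal sketch (not Nova's actual implementation) of enumerating the valid sockets*cores*threads factorizations of a vCPU count under such maxima:

    # Minimal sketch: enumerate sockets*cores*threads factorizations of a
    # vCPU count, capped by the maxima seen in the log. Not Nova's code.
    from collections import namedtuple

    Topology = namedtuple("Topology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, max_cores) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= max_threads:
                    topos.append(Topology(sockets, cores, threads))
        return topos

    # For 1 vCPU this yields the single topology logged above:
    # [Topology(sockets=1, cores=1, threads=1)]
    print(possible_topologies(1))
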
Jan 23 10:23:44 compute-0 nova_compute[249229]: 2026-01-23 10:23:44.801 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:23:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:44.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:45 compute-0 sudo[265089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:23:45 compute-0 sudo[265089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:45 compute-0 sudo[265089]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 10:23:45 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313488858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.253 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
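
The `ceph mon dump --format=json` call logged above is how the driver discovers the monitor endpoints that later show up as the <host> entries in the generated disk XML. A rough sketch of running the same command and listing the monitors from its JSON; the field layout ("mons", "addr"/"public_addrs") is assumed from typical `ceph mon dump` output, not taken from this log:

    # Hedged sketch: run the same command as the log entry above and
    # print the monitor endpoints from its JSON output.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        # Older releases expose a flat "addr"; newer ones nest the
        # addresses under "public_addrs".
        print(mon.get("name"), mon.get("addr") or mon.get("public_addrs"))
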
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.284 249233 DEBUG nova.storage.rbd_utils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.288 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:23:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:23:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:23:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 10:23:45 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984605226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.739 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.741 249233 DEBUG nova.virt.libvirt.vif [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-887663066',display_name='tempest-TestNetworkBasicOps-server-887663066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-887663066',id=8,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA/aI2vPj8RkvZIXg0qwsg+mZSpAN4KYb+jWWGi7brg+su0APA02U+0u4zmFgnmB6GMhllEQLzjYT+6n6+qiaS4xy7JGGjDUIERWMZ9GUsTtnQNtbkViktpWv9cmVqG8aA==',key_name='tempest-TestNetworkBasicOps-1244907344',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-ed9ze0ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:23:42Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=63ed4545-8ad4-406e-be3b-3aaafb68fbcc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d744a552-c706-444a-8a15-4a98c41eed50", "address": "fa:16:3e:9f:48:6d", "network": {"id": "2fb57e44-e877-47c8-860b-b36d5b5ff599", "bridge": "br-int", "label": "tempest-network-smoke--2143346610", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd744a552-c7", "ovs_interfaceid": "d744a552-c706-444a-8a15-4a98c41eed50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": 
{}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.742 249233 DEBUG nova.network.os_vif_util [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "d744a552-c706-444a-8a15-4a98c41eed50", "address": "fa:16:3e:9f:48:6d", "network": {"id": "2fb57e44-e877-47c8-860b-b36d5b5ff599", "bridge": "br-int", "label": "tempest-network-smoke--2143346610", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd744a552-c7", "ovs_interfaceid": "d744a552-c706-444a-8a15-4a98c41eed50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.743 249233 DEBUG nova.network.os_vif_util [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:48:6d,bridge_name='br-int',has_traffic_filtering=True,id=d744a552-c706-444a-8a15-4a98c41eed50,network=Network(2fb57e44-e877-47c8-860b-b36d5b5ff599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd744a552-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.745 249233 DEBUG nova.objects.instance [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'pci_devices' on Instance uuid 63ed4545-8ad4-406e-be3b-3aaafb68fbcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:23:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.762 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] End _get_guest_xml xml=<domain type="kvm">
Jan 23 10:23:45 compute-0 nova_compute[249229]:   <uuid>63ed4545-8ad4-406e-be3b-3aaafb68fbcc</uuid>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   <name>instance-00000008</name>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   <memory>131072</memory>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   <vcpu>1</vcpu>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   <metadata>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <nova:name>tempest-TestNetworkBasicOps-server-887663066</nova:name>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <nova:creationTime>2026-01-23 10:23:44</nova:creationTime>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <nova:flavor name="m1.nano">
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <nova:memory>128</nova:memory>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <nova:disk>1</nova:disk>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <nova:swap>0</nova:swap>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <nova:ephemeral>0</nova:ephemeral>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <nova:vcpus>1</nova:vcpus>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       </nova:flavor>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <nova:owner>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <nova:user uuid="f459c4e71e6c47acb0f8aaf83f34695e">tempest-TestNetworkBasicOps-655467240-project-member</nova:user>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <nova:project uuid="acc90003f0f7412b8daf8a1b6f0f1494">tempest-TestNetworkBasicOps-655467240</nova:project>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       </nova:owner>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <nova:root type="image" uuid="271ec98e-d058-421b-bbfb-4b4a5954c90a"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <nova:ports>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <nova:port uuid="d744a552-c706-444a-8a15-4a98c41eed50">
Jan 23 10:23:45 compute-0 nova_compute[249229]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         </nova:port>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       </nova:ports>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     </nova:instance>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   </metadata>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   <sysinfo type="smbios">
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <system>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <entry name="manufacturer">RDO</entry>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <entry name="product">OpenStack Compute</entry>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <entry name="serial">63ed4545-8ad4-406e-be3b-3aaafb68fbcc</entry>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <entry name="uuid">63ed4545-8ad4-406e-be3b-3aaafb68fbcc</entry>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <entry name="family">Virtual Machine</entry>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     </system>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   </sysinfo>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   <os>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <boot dev="hd"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <smbios mode="sysinfo"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   </os>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   <features>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <acpi/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <apic/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <vmcoreinfo/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   </features>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   <clock offset="utc">
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <timer name="pit" tickpolicy="delay"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <timer name="hpet" present="no"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   </clock>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   <cpu mode="host-model" match="exact">
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <topology sockets="1" cores="1" threads="1"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   </cpu>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   <devices>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <disk type="network" device="disk">
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <driver type="raw" cache="none"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <source protocol="rbd" name="vms/63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk">
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <host name="192.168.122.100" port="6789"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <host name="192.168.122.102" port="6789"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <host name="192.168.122.101" port="6789"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       </source>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <auth username="openstack">
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <secret type="ceph" uuid="f3005f84-239a-55b6-a948-8f1fb592b920"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       </auth>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <target dev="vda" bus="virtio"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <disk type="network" device="cdrom">
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <driver type="raw" cache="none"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <source protocol="rbd" name="vms/63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk.config">
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <host name="192.168.122.100" port="6789"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <host name="192.168.122.102" port="6789"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <host name="192.168.122.101" port="6789"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       </source>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <auth username="openstack">
Jan 23 10:23:45 compute-0 nova_compute[249229]:         <secret type="ceph" uuid="f3005f84-239a-55b6-a948-8f1fb592b920"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       </auth>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <target dev="sda" bus="sata"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <interface type="ethernet">
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <mac address="fa:16:3e:9f:48:6d"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <model type="virtio"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <driver name="vhost" rx_queue_size="512"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <mtu size="1442"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <target dev="tapd744a552-c7"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     </interface>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <serial type="pty">
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <log file="/var/lib/nova/instances/63ed4545-8ad4-406e-be3b-3aaafb68fbcc/console.log" append="off"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     </serial>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <video>
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <model type="virtio"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     </video>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <input type="tablet" bus="usb"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <rng model="virtio">
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <backend model="random">/dev/urandom</backend>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     </rng>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <controller type="usb" index="0"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     <memballoon model="virtio">
Jan 23 10:23:45 compute-0 nova_compute[249229]:       <stats period="10"/>
Jan 23 10:23:45 compute-0 nova_compute[249229]:     </memballoon>
Jan 23 10:23:45 compute-0 nova_compute[249229]:   </devices>
Jan 23 10:23:45 compute-0 nova_compute[249229]: </domain>
Jan 23 10:23:45 compute-0 nova_compute[249229]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
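
The domain XML dumped above is what the libvirt driver hands to libvirtd to define instance-00000008: an RBD-backed root disk and config-drive CD-ROM pointing at the three monitors, an OVS tap interface, and a q35/host-model machine. A purely illustrative sketch (not part of Nova) of pulling the RBD disk sources and monitor hosts back out of such an XML with the standard library parser; the local filename is hypothetical:

    # Illustrative only: parse a saved copy of the domain XML above and
    # print each RBD-backed disk together with its monitor hosts.
    import xml.etree.ElementTree as ET

    tree = ET.parse("instance-00000008.xml")  # hypothetical local copy
    for disk in tree.findall("./devices/disk"):
        source = disk.find("source")
        if source is None or source.get("protocol") != "rbd":
            continue
        hosts = ["%s:%s" % (h.get("name"), h.get("port"))
                 for h in source.findall("host")]
        print(source.get("name"), "->", ", ".join(hosts))
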
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.764 249233 DEBUG nova.compute.manager [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Preparing to wait for external event network-vif-plugged-d744a552-c706-444a-8a15-4a98c41eed50 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.764 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.765 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.765 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
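
The three lockutils lines above are the usual acquire/acquired/released pattern around the per-instance "-events" lock while the manager registers the network-vif-plugged event it will wait on. A minimal sketch of guarding a critical section the same way with the public oslo.concurrency API; the function and dictionary here are just examples, not Nova code:

    # Minimal sketch of the pattern in the log: serialize access to
    # per-instance event bookkeeping behind a named lock.
    from oslo_concurrency import lockutils

    def record_event(events, instance_uuid, event_name):
        with lockutils.lock("%s-events" % instance_uuid):
            # Everything in this block is serialized against other users
            # of the same lock name within the process.
            events.setdefault(instance_uuid, []).append(event_name)
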
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.766 249233 DEBUG nova.virt.libvirt.vif [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-887663066',display_name='tempest-TestNetworkBasicOps-server-887663066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-887663066',id=8,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA/aI2vPj8RkvZIXg0qwsg+mZSpAN4KYb+jWWGi7brg+su0APA02U+0u4zmFgnmB6GMhllEQLzjYT+6n6+qiaS4xy7JGGjDUIERWMZ9GUsTtnQNtbkViktpWv9cmVqG8aA==',key_name='tempest-TestNetworkBasicOps-1244907344',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-ed9ze0ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:23:42Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=63ed4545-8ad4-406e-be3b-3aaafb68fbcc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d744a552-c706-444a-8a15-4a98c41eed50", "address": "fa:16:3e:9f:48:6d", "network": {"id": "2fb57e44-e877-47c8-860b-b36d5b5ff599", "bridge": "br-int", "label": "tempest-network-smoke--2143346610", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd744a552-c7", "ovs_interfaceid": "d744a552-c706-444a-8a15-4a98c41eed50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": 
true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.766 249233 DEBUG nova.network.os_vif_util [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "d744a552-c706-444a-8a15-4a98c41eed50", "address": "fa:16:3e:9f:48:6d", "network": {"id": "2fb57e44-e877-47c8-860b-b36d5b5ff599", "bridge": "br-int", "label": "tempest-network-smoke--2143346610", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd744a552-c7", "ovs_interfaceid": "d744a552-c706-444a-8a15-4a98c41eed50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.767 249233 DEBUG nova.network.os_vif_util [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:48:6d,bridge_name='br-int',has_traffic_filtering=True,id=d744a552-c706-444a-8a15-4a98c41eed50,network=Network(2fb57e44-e877-47c8-860b-b36d5b5ff599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd744a552-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.767 249233 DEBUG os_vif [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:48:6d,bridge_name='br-int',has_traffic_filtering=True,id=d744a552-c706-444a-8a15-4a98c41eed50,network=Network(2fb57e44-e877-47c8-860b-b36d5b5ff599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd744a552-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.768 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.768 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.769 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.772 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.773 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd744a552-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.773 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd744a552-c7, col_values=(('external_ids', {'iface-id': 'd744a552-c706-444a-8a15-4a98c41eed50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:48:6d', 'vm-uuid': '63ed4545-8ad4-406e-be3b-3aaafb68fbcc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:23:45 compute-0 NetworkManager[48866]: <info>  [1769163825.7758] manager: (tapd744a552-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.777 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.785 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.786 249233 INFO os_vif [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:48:6d,bridge_name='br-int',has_traffic_filtering=True,id=d744a552-c706-444a-8a15-4a98c41eed50,network=Network(2fb57e44-e877-47c8-860b-b36d5b5ff599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd744a552-c7')
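
The ovsdbapp transactions logged above (AddBridgeCommand, AddPortCommand, DbSetCommand on Interface.external_ids) correspond to what ovs-vsctl would do from the shell. A hedged illustration of the same three steps driven from Python, with the bridge, port, and external_ids values copied from the log; this is not how os-vif is implemented, only the equivalent CLI operations:

    # Illustration of the three OVSDB operations logged above, expressed
    # as ovs-vsctl invocations. Values are the ones from this log.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("ovs-vsctl", "--may-exist", "add-br", "br-int",
        "--", "set", "Bridge", "br-int", "datapath_type=system")
    run("ovs-vsctl", "--may-exist", "add-port", "br-int", "tapd744a552-c7")
    run("ovs-vsctl", "set", "Interface", "tapd744a552-c7",
        "external_ids:iface-id=d744a552-c706-444a-8a15-4a98c41eed50",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:9f:48:6d",
        "external_ids:vm-uuid=63ed4545-8ad4-406e-be3b-3aaafb68fbcc")
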
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.842 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.842 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.843 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No VIF found with MAC fa:16:3e:9f:48:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.844 249233 INFO nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Using config drive
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.874 249233 DEBUG nova.storage.rbd_utils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:23:45 compute-0 ceph-mon[74335]: pgmap v914: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:23:45 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3313488858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:23:45 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1984605226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.952 249233 DEBUG nova.network.neutron [req-631b04f3-1e24-4070-9157-a92c68c41626 req-deb1ef3e-5d1e-4430-9547-6fbc9475cd54 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Updated VIF entry in instance network info cache for port d744a552-c706-444a-8a15-4a98c41eed50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.953 249233 DEBUG nova.network.neutron [req-631b04f3-1e24-4070-9157-a92c68c41626 req-deb1ef3e-5d1e-4430-9547-6fbc9475cd54 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Updating instance_info_cache with network_info: [{"id": "d744a552-c706-444a-8a15-4a98c41eed50", "address": "fa:16:3e:9f:48:6d", "network": {"id": "2fb57e44-e877-47c8-860b-b36d5b5ff599", "bridge": "br-int", "label": "tempest-network-smoke--2143346610", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd744a552-c7", "ovs_interfaceid": "d744a552-c706-444a-8a15-4a98c41eed50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:23:45 compute-0 nova_compute[249229]: 2026-01-23 10:23:45.968 249233 DEBUG oslo_concurrency.lockutils [req-631b04f3-1e24-4070-9157-a92c68c41626 req-deb1ef3e-5d1e-4430-9547-6fbc9475cd54 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Releasing lock "refresh_cache-63ed4545-8ad4-406e-be3b-3aaafb68fbcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:23:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:46.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.214 249233 INFO nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Creating config drive at /var/lib/nova/instances/63ed4545-8ad4-406e-be3b-3aaafb68fbcc/disk.config
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.218 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63ed4545-8ad4-406e-be3b-3aaafb68fbcc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxyatxqet execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.343 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63ed4545-8ad4-406e-be3b-3aaafb68fbcc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxyatxqet" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.381 249233 DEBUG nova.storage.rbd_utils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.387 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/63ed4545-8ad4-406e-be3b-3aaafb68fbcc/disk.config 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.562 249233 DEBUG oslo_concurrency.processutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/63ed4545-8ad4-406e-be3b-3aaafb68fbcc/disk.config 63ed4545-8ad4-406e-be3b-3aaafb68fbcc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.564 249233 INFO nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Deleting local config drive /var/lib/nova/instances/63ed4545-8ad4-406e-be3b-3aaafb68fbcc/disk.config because it was imported into RBD.
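
The config-drive flow in the entries above is: build an ISO9660 image locally with mkisofs, import it into the "vms" RBD pool as <uuid>_disk.config, then delete the local copy. A sketch of the same three steps from Python; the flags, paths, pool, and temporary staging directory are copied from the log lines above, and this is an illustration rather than Nova's implementation (the publisher string is passed here as a single argument):

    # Sketch of the config-drive flow logged above: build the ISO locally,
    # import it into the "vms" pool, then remove the local file.
    import os
    import subprocess

    inst = "63ed4545-8ad4-406e-be3b-3aaafb68fbcc"
    iso = "/var/lib/nova/instances/%s/disk.config" % inst

    subprocess.run(
        ["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmpxyatxqet"],  # staging directory from the log
        check=True)

    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, "%s_disk.config" % inst,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    os.unlink(iso)  # "Deleting local config drive ... imported into RBD"
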
Jan 23 10:23:46 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 23 10:23:46 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 23 10:23:46 compute-0 kernel: tapd744a552-c7: entered promiscuous mode
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.662 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:46 compute-0 NetworkManager[48866]: <info>  [1769163826.6662] manager: (tapd744a552-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 23 10:23:46 compute-0 ovn_controller[151634]: 2026-01-23T10:23:46Z|00050|binding|INFO|Claiming lport d744a552-c706-444a-8a15-4a98c41eed50 for this chassis.
Jan 23 10:23:46 compute-0 ovn_controller[151634]: 2026-01-23T10:23:46Z|00051|binding|INFO|d744a552-c706-444a-8a15-4a98c41eed50: Claiming fa:16:3e:9f:48:6d 10.100.0.11
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.669 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.688 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:48:6d 10.100.0.11'], port_security=['fa:16:3e:9f:48:6d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1107750174', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '63ed4545-8ad4-406e-be3b-3aaafb68fbcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fb57e44-e877-47c8-860b-b36d5b5ff599', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1107750174', 'neutron:project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41f899d0-e5bc-43b7-808c-efb54f22dad4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78b908b7-6c71-4e47-8053-0540c37dfe2c, chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], logical_port=d744a552-c706-444a-8a15-4a98c41eed50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.689 161921 INFO neutron.agent.ovn.metadata.agent [-] Port d744a552-c706-444a-8a15-4a98c41eed50 in datapath 2fb57e44-e877-47c8-860b-b36d5b5ff599 bound to our chassis
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.690 161921 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2fb57e44-e877-47c8-860b-b36d5b5ff599
Jan 23 10:23:46 compute-0 systemd-machined[216411]: New machine qemu-3-instance-00000008.
Jan 23 10:23:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 68 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 MiB/s wr, 6 op/s
Jan 23 10:23:46 compute-0 systemd-udevd[265249]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.702 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[c226637a-2e43-440c-9dc9-22e81aacfee1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.703 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2fb57e44-e1 in ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
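
The entry above shows the metadata agent creating a veth pair for the ovnmeta-<network-uuid> namespace (the -e0 end stays in the root namespace, -e1 goes inside). A hedged sketch of the equivalent iproute2 steps driven from Python; device and namespace names are copied from the log, and the real agent performs this through privsep-wrapped pyroute2 calls (neutron.privileged.agent.linux.ip_lib) rather than shelling out:

    # Rough equivalent of the veth provisioning logged above, using plain
    # iproute2 commands. Names come from the log; illustrative only.
    import subprocess

    ns = "ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599"

    def ip(*args):
        subprocess.run(("ip",) + args, check=True)

    ip("netns", "add", ns)
    ip("link", "add", "tap2fb57e44-e0", "type", "veth",
       "peer", "name", "tap2fb57e44-e1")
    ip("link", "set", "tap2fb57e44-e1", "netns", ns)
    ip("link", "set", "tap2fb57e44-e0", "up")
    ip("-n", ns, "link", "set", "tap2fb57e44-e1", "up")
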
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.705 255218 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2fb57e44-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.705 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[8caf4e61-c88b-42b4-b438-dfb07fb9d53c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.706 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[833b8917-edc6-42c8-ae22-cf8910437b9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 NetworkManager[48866]: <info>  [1769163826.7092] device (tapd744a552-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 10:23:46 compute-0 NetworkManager[48866]: <info>  [1769163826.7099] device (tapd744a552-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 10:23:46 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000008.
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.719 162436 DEBUG oslo.privsep.daemon [-] privsep: reply[d47441cb-a2d8-4887-9643-860219bf8f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.736 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[3c40e479-3d8e-4401-8396-3b24da6d3ca6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.743 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:46 compute-0 ovn_controller[151634]: 2026-01-23T10:23:46Z|00052|binding|INFO|Setting lport d744a552-c706-444a-8a15-4a98c41eed50 ovn-installed in OVS
Jan 23 10:23:46 compute-0 ovn_controller[151634]: 2026-01-23T10:23:46Z|00053|binding|INFO|Setting lport d744a552-c706-444a-8a15-4a98c41eed50 up in Southbound
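
Once ovn-controller has claimed the lport and marked it up in the Southbound DB, the binding can be checked from the chassis with standard ovn-sbctl queries. A small hedged example; the logical port UUID is the one from this log, and the query itself is ordinary db-ctl usage rather than anything specific to this deployment:

    # Hedged example: confirm from the chassis that the logical port
    # logged above is bound and up in the OVN Southbound DB.
    import subprocess

    port = "d744a552-c706-444a-8a15-4a98c41eed50"
    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding", "logical_port=%s" % port],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)
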
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.748 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.774 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[41dd02fa-a25d-4548-a9f3-73e1ff937b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 systemd-udevd[265252]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.780 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c7e363-9b7b-48b1-8bc4-ba7df987e2d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 NetworkManager[48866]: <info>  [1769163826.7814] manager: (tap2fb57e44-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.807 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[46d1c71e-77f5-42c8-b783-ede95eb23529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.811 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[2b72c5f4-e404-405e-884f-7428119985f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 NetworkManager[48866]: <info>  [1769163826.8272] device (tap2fb57e44-e0): carrier: link connected
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.833 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec5db3b-1528-4bec-8246-01a89b99f9fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.853 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[093c5414-6005-431f-9935-515c5f6cd0d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fb57e44-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:4a:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494836, 'reachable_time': 42112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265282, 'error': None, 'target': 'ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.869 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[7f18040e-9b29-48fb-85de-00cb4cc128be]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:4a5f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494836, 'tstamp': 494836}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265284, 'error': None, 'target': 'ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.884 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[9972ea48-57ae-4150-b17a-d3d83c10855a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fb57e44-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:4a:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494836, 'reachable_time': 42112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265285, 'error': None, 'target': 'ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.911 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6d1e9b-d409-4728-b6fa-8259ceca92ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.966 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6ce549-ddd9-4901-9b9b-857218307b2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.968 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fb57e44-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.968 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.969 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fb57e44-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:23:46 compute-0 NetworkManager[48866]: <info>  [1769163826.9712] manager: (tap2fb57e44-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 23 10:23:46 compute-0 kernel: tap2fb57e44-e0: entered promiscuous mode
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.970 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:46 compute-0 ovn_controller[151634]: 2026-01-23T10:23:46Z|00054|binding|INFO|Releasing lport 77b74dfc-4c39-4ac5-b1a3-1aa2c0b19a29 from this chassis (sb_readonly=0)
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.973 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2fb57e44-e0, col_values=(('external_ids', {'iface-id': '77b74dfc-4c39-4ac5-b1a3-1aa2c0b19a29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.977 161921 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2fb57e44-e877-47c8-860b-b36d5b5ff599.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2fb57e44-e877-47c8-860b-b36d5b5ff599.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.977 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.978 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ca4e42-afd7-43a1-8138-7601bff5a562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.979 161921 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: global
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     log         /dev/log local0 debug
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     log-tag     haproxy-metadata-proxy-2fb57e44-e877-47c8-860b-b36d5b5ff599
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     user        root
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     group       root
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     maxconn     1024
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     pidfile     /var/lib/neutron/external/pids/2fb57e44-e877-47c8-860b-b36d5b5ff599.pid.haproxy
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     daemon
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: defaults
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     log global
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     mode http
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     option httplog
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     option dontlognull
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     option http-server-close
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     option forwardfor
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     retries                 3
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     timeout http-request    30s
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     timeout connect         30s
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     timeout client          32s
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     timeout server          32s
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     timeout http-keep-alive 30s
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: listen listener
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     bind 169.254.169.254:80
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     server metadata /var/lib/neutron/metadata_proxy
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:     http-request add-header X-OVN-Network-ID 2fb57e44-e877-47c8-860b-b36d5b5ff599
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 23 10:23:46 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:46.979 161921 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599', 'env', 'PROCESS_TAG=haproxy-2fb57e44-e877-47c8-860b-b36d5b5ff599', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2fb57e44-e877-47c8-860b-b36d5b5ff599.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 23 10:23:46 compute-0 nova_compute[249229]: 2026-01-23 10:23:46.990 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:46.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.207 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769163827.2070851, 63ed4545-8ad4-406e-be3b-3aaafb68fbcc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.208 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] VM Started (Lifecycle Event)
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.238 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.242 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769163827.2080805, 63ed4545-8ad4-406e-be3b-3aaafb68fbcc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.242 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] VM Paused (Lifecycle Event)
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.258 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.261 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.282 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 10:23:47 compute-0 podman[265359]: 2026-01-23 10:23:47.354197162 +0000 UTC m=+0.055732112 container create b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.391 249233 DEBUG nova.compute.manager [req-e2868b1f-3ad9-4258-90b3-4daadfae5a60 req-ae0dfad3-fb12-43dc-9569-a6ce3c96fb13 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Received event network-vif-plugged-d744a552-c706-444a-8a15-4a98c41eed50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.392 249233 DEBUG oslo_concurrency.lockutils [req-e2868b1f-3ad9-4258-90b3-4daadfae5a60 req-ae0dfad3-fb12-43dc-9569-a6ce3c96fb13 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.393 249233 DEBUG oslo_concurrency.lockutils [req-e2868b1f-3ad9-4258-90b3-4daadfae5a60 req-ae0dfad3-fb12-43dc-9569-a6ce3c96fb13 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:47 compute-0 systemd[1]: Started libpod-conmon-b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07.scope.
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.393 249233 DEBUG oslo_concurrency.lockutils [req-e2868b1f-3ad9-4258-90b3-4daadfae5a60 req-ae0dfad3-fb12-43dc-9569-a6ce3c96fb13 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.394 249233 DEBUG nova.compute.manager [req-e2868b1f-3ad9-4258-90b3-4daadfae5a60 req-ae0dfad3-fb12-43dc-9569-a6ce3c96fb13 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Processing event network-vif-plugged-d744a552-c706-444a-8a15-4a98c41eed50 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.395 249233 DEBUG nova.compute.manager [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.399 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769163827.3992434, 63ed4545-8ad4-406e-be3b-3aaafb68fbcc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.399 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] VM Resumed (Lifecycle Event)
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.401 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.404 249233 INFO nova.virt.libvirt.driver [-] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Instance spawned successfully.
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.404 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 10:23:47 compute-0 podman[265359]: 2026-01-23 10:23:47.322807147 +0000 UTC m=+0.024342117 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.423 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:23:47 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.427 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.428 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.428 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.429 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.429 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.430 249233 DEBUG nova.virt.libvirt.driver [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35ff13d6c6ea93f969e0a894af456eb73865024cd2b2d15a7913ea00a8ac823f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.438 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 10:23:47 compute-0 podman[265359]: 2026-01-23 10:23:47.445438773 +0000 UTC m=+0.146973753 container init b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 10:23:47 compute-0 podman[265359]: 2026-01-23 10:23:47.450453202 +0000 UTC m=+0.151988162 container start b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.464 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 10:23:47 compute-0 neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599[265374]: [NOTICE]   (265378) : New worker (265380) forked
Jan 23 10:23:47 compute-0 neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599[265374]: [NOTICE]   (265378) : Loading success.
Jan 23 10:23:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.504 249233 INFO nova.compute.manager [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Took 5.15 seconds to spawn the instance on the hypervisor.
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.505 249233 DEBUG nova.compute.manager [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.568 249233 INFO nova.compute.manager [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Took 6.06 seconds to build instance.
Jan 23 10:23:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:47 compute-0 nova_compute[249229]: 2026-01-23 10:23:47.592 249233 DEBUG oslo_concurrency.lockutils [None req-e491d4ae-1929-4bed-974e-cace68ae2fe5 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:47.784Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:23:47 compute-0 ceph-mon[74335]: pgmap v915: 353 pgs: 353 active+clean; 68 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 MiB/s wr, 6 op/s
Jan 23 10:23:48 compute-0 nova_compute[249229]: 2026-01-23 10:23:48.081 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:48.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:48 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 23 10:23:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 23 10:23:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1571219015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:23:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1571219015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:23:48 compute-0 ceph-mon[74335]: pgmap v916: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 23 10:23:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:49 compute-0 nova_compute[249229]: 2026-01-23 10:23:49.491 249233 DEBUG nova.compute.manager [req-488fe767-f04a-4ea6-a8ba-f1c57ad05ed8 req-51c6c2bc-8533-407e-9e90-35a2fbaca44e 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Received event network-vif-plugged-d744a552-c706-444a-8a15-4a98c41eed50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:23:49 compute-0 nova_compute[249229]: 2026-01-23 10:23:49.491 249233 DEBUG oslo_concurrency.lockutils [req-488fe767-f04a-4ea6-a8ba-f1c57ad05ed8 req-51c6c2bc-8533-407e-9e90-35a2fbaca44e 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:49 compute-0 nova_compute[249229]: 2026-01-23 10:23:49.492 249233 DEBUG oslo_concurrency.lockutils [req-488fe767-f04a-4ea6-a8ba-f1c57ad05ed8 req-51c6c2bc-8533-407e-9e90-35a2fbaca44e 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:49 compute-0 nova_compute[249229]: 2026-01-23 10:23:49.492 249233 DEBUG oslo_concurrency.lockutils [req-488fe767-f04a-4ea6-a8ba-f1c57ad05ed8 req-51c6c2bc-8533-407e-9e90-35a2fbaca44e 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:49 compute-0 nova_compute[249229]: 2026-01-23 10:23:49.492 249233 DEBUG nova.compute.manager [req-488fe767-f04a-4ea6-a8ba-f1c57ad05ed8 req-51c6c2bc-8533-407e-9e90-35a2fbaca44e 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] No waiting events found dispatching network-vif-plugged-d744a552-c706-444a-8a15-4a98c41eed50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:23:49 compute-0 nova_compute[249229]: 2026-01-23 10:23:49.492 249233 WARNING nova.compute.manager [req-488fe767-f04a-4ea6-a8ba-f1c57ad05ed8 req-51c6c2bc-8533-407e-9e90-35a2fbaca44e 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Received unexpected event network-vif-plugged-d744a552-c706-444a-8a15-4a98c41eed50 for instance with vm_state active and task_state None.
Jan 23 10:23:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:49 compute-0 sudo[265391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:23:49 compute-0 sudo[265391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:49 compute-0 sudo[265391]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:49 compute-0 sudo[265423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:23:49 compute-0 sudo[265423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:49] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:23:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:49] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:23:49 compute-0 podman[265416]: 2026-01-23 10:23:49.991683134 +0000 UTC m=+0.100113546 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 10:23:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:23:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:23:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:23:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:23:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:23:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:23:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:23:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:23:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:23:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:50.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:50 compute-0 sudo[265423]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:23:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:23:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:23:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:23:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:23:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:23:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:23:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 23 10:23:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:23:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:23:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:23:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:23:50 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:23:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:23:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:23:50 compute-0 sudo[265500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:23:50 compute-0 sudo[265500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:50 compute-0 sudo[265500]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:50 compute-0 nova_compute[249229]: 2026-01-23 10:23:50.775 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:50 compute-0 sudo[265525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:23:50 compute-0 sudo[265525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:50.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:23:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:23:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:23:51 compute-0 ceph-mon[74335]: pgmap v917: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 23 10:23:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:23:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:23:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:23:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:23:51 compute-0 podman[265592]: 2026-01-23 10:23:51.240086823 +0000 UTC m=+0.045253700 container create 143967408e50e48442f7212c35ef41ae6a4b105465755a7afa7ca38c95a0c30a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 10:23:51 compute-0 systemd[1]: Started libpod-conmon-143967408e50e48442f7212c35ef41ae6a4b105465755a7afa7ca38c95a0c30a.scope.
Jan 23 10:23:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:23:51 compute-0 podman[265592]: 2026-01-23 10:23:51.224960712 +0000 UTC m=+0.030127609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:23:51 compute-0 podman[265592]: 2026-01-23 10:23:51.326703576 +0000 UTC m=+0.131870473 container init 143967408e50e48442f7212c35ef41ae6a4b105465755a7afa7ca38c95a0c30a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 10:23:51 compute-0 podman[265592]: 2026-01-23 10:23:51.333328683 +0000 UTC m=+0.138495560 container start 143967408e50e48442f7212c35ef41ae6a4b105465755a7afa7ca38c95a0c30a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 10:23:51 compute-0 podman[265592]: 2026-01-23 10:23:51.336992732 +0000 UTC m=+0.142159639 container attach 143967408e50e48442f7212c35ef41ae6a4b105465755a7afa7ca38c95a0c30a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_tharp, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:23:51 compute-0 youthful_tharp[265609]: 167 167
Jan 23 10:23:51 compute-0 systemd[1]: libpod-143967408e50e48442f7212c35ef41ae6a4b105465755a7afa7ca38c95a0c30a.scope: Deactivated successfully.
Jan 23 10:23:51 compute-0 podman[265592]: 2026-01-23 10:23:51.340061004 +0000 UTC m=+0.145227881 container died 143967408e50e48442f7212c35ef41ae6a4b105465755a7afa7ca38c95a0c30a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-32f6a951243cf19119e4d7f9671626a11ca4bc4c3db57969f60f25a6ad6340b3-merged.mount: Deactivated successfully.
Jan 23 10:23:51 compute-0 podman[265592]: 2026-01-23 10:23:51.378812449 +0000 UTC m=+0.183979326 container remove 143967408e50e48442f7212c35ef41ae6a4b105465755a7afa7ca38c95a0c30a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:23:51 compute-0 systemd[1]: libpod-conmon-143967408e50e48442f7212c35ef41ae6a4b105465755a7afa7ca38c95a0c30a.scope: Deactivated successfully.
Jan 23 10:23:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:51 compute-0 podman[265633]: 2026-01-23 10:23:51.552282871 +0000 UTC m=+0.042897730 container create b407f7365f52e2464bcce1621e4161f6127415adc79dbc2fc7912632ecc5fb4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 10:23:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:51 compute-0 systemd[1]: Started libpod-conmon-b407f7365f52e2464bcce1621e4161f6127415adc79dbc2fc7912632ecc5fb4f.scope.
Jan 23 10:23:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcd5a51954e72abc837f3673236406ea1561188e19f10cdda7b8be6581d9484/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:51 compute-0 podman[265633]: 2026-01-23 10:23:51.534885612 +0000 UTC m=+0.025500491 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcd5a51954e72abc837f3673236406ea1561188e19f10cdda7b8be6581d9484/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcd5a51954e72abc837f3673236406ea1561188e19f10cdda7b8be6581d9484/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcd5a51954e72abc837f3673236406ea1561188e19f10cdda7b8be6581d9484/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcd5a51954e72abc837f3673236406ea1561188e19f10cdda7b8be6581d9484/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:51 compute-0 podman[265633]: 2026-01-23 10:23:51.661064764 +0000 UTC m=+0.151679633 container init b407f7365f52e2464bcce1621e4161f6127415adc79dbc2fc7912632ecc5fb4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_stonebraker, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:23:51 compute-0 podman[265633]: 2026-01-23 10:23:51.669898378 +0000 UTC m=+0.160513237 container start b407f7365f52e2464bcce1621e4161f6127415adc79dbc2fc7912632ecc5fb4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 10:23:51 compute-0 podman[265633]: 2026-01-23 10:23:51.673803674 +0000 UTC m=+0.164418543 container attach b407f7365f52e2464bcce1621e4161f6127415adc79dbc2fc7912632ecc5fb4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 23 10:23:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:51 compute-0 nova_compute[249229]: 2026-01-23 10:23:51.896 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:51 compute-0 NetworkManager[48866]: <info>  [1769163831.8967] manager: (patch-br-int-to-provnet-995e8c2d-ca55-405c-bf26-97e408875e42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 23 10:23:51 compute-0 NetworkManager[48866]: <info>  [1769163831.8975] manager: (patch-provnet-995e8c2d-ca55-405c-bf26-97e408875e42-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Jan 23 10:23:51 compute-0 ovn_controller[151634]: 2026-01-23T10:23:51Z|00055|binding|INFO|Releasing lport 77b74dfc-4c39-4ac5-b1a3-1aa2c0b19a29 from this chassis (sb_readonly=0)
Jan 23 10:23:51 compute-0 ovn_controller[151634]: 2026-01-23T10:23:51Z|00056|binding|INFO|Releasing lport 77b74dfc-4c39-4ac5-b1a3-1aa2c0b19a29 from this chassis (sb_readonly=0)
Jan 23 10:23:51 compute-0 nova_compute[249229]: 2026-01-23 10:23:51.931 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:51 compute-0 nova_compute[249229]: 2026-01-23 10:23:51.935 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:52 compute-0 busy_stonebraker[265649]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:23:52 compute-0 busy_stonebraker[265649]: --> All data devices are unavailable
Jan 23 10:23:52 compute-0 systemd[1]: libpod-b407f7365f52e2464bcce1621e4161f6127415adc79dbc2fc7912632ecc5fb4f.scope: Deactivated successfully.
Jan 23 10:23:52 compute-0 podman[265633]: 2026-01-23 10:23:52.027745856 +0000 UTC m=+0.518360715 container died b407f7365f52e2464bcce1621e4161f6127415adc79dbc2fc7912632ecc5fb4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 23 10:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddcd5a51954e72abc837f3673236406ea1561188e19f10cdda7b8be6581d9484-merged.mount: Deactivated successfully.
Jan 23 10:23:52 compute-0 podman[265633]: 2026-01-23 10:23:52.068776119 +0000 UTC m=+0.559390978 container remove b407f7365f52e2464bcce1621e4161f6127415adc79dbc2fc7912632ecc5fb4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 23 10:23:52 compute-0 systemd[1]: libpod-conmon-b407f7365f52e2464bcce1621e4161f6127415adc79dbc2fc7912632ecc5fb4f.scope: Deactivated successfully.
Jan 23 10:23:52 compute-0 sudo[265525]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:52 compute-0 sudo[265679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:23:52 compute-0 sudo[265679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:52 compute-0 sudo[265679]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:52.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:52 compute-0 sudo[265704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:23:52 compute-0 sudo[265704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.483 249233 DEBUG nova.compute.manager [req-5a65b80b-c37f-411f-8c61-e31825588555 req-612c1e33-74e9-408e-800e-8052f93fa320 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Received event network-changed-d744a552-c706-444a-8a15-4a98c41eed50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.484 249233 DEBUG nova.compute.manager [req-5a65b80b-c37f-411f-8c61-e31825588555 req-612c1e33-74e9-408e-800e-8052f93fa320 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Refreshing instance network info cache due to event network-changed-d744a552-c706-444a-8a15-4a98c41eed50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.484 249233 DEBUG oslo_concurrency.lockutils [req-5a65b80b-c37f-411f-8c61-e31825588555 req-612c1e33-74e9-408e-800e-8052f93fa320 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "refresh_cache-63ed4545-8ad4-406e-be3b-3aaafb68fbcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.484 249233 DEBUG oslo_concurrency.lockutils [req-5a65b80b-c37f-411f-8c61-e31825588555 req-612c1e33-74e9-408e-800e-8052f93fa320 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquired lock "refresh_cache-63ed4545-8ad4-406e-be3b-3aaafb68fbcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.484 249233 DEBUG nova.network.neutron [req-5a65b80b-c37f-411f-8c61-e31825588555 req-612c1e33-74e9-408e-800e-8052f93fa320 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Refreshing network info cache for port d744a552-c706-444a-8a15-4a98c41eed50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 10:23:52 compute-0 podman[265770]: 2026-01-23 10:23:52.642294287 +0000 UTC m=+0.053182486 container create 01faed8df8bfaef6535f1f7316ddebf9254953c1b25a5dd58b093b73822398a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nightingale, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 23 10:23:52 compute-0 systemd[1]: Started libpod-conmon-01faed8df8bfaef6535f1f7316ddebf9254953c1b25a5dd58b093b73822398a4.scope.
Jan 23 10:23:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.701 249233 DEBUG oslo_concurrency.lockutils [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:52 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.703 249233 DEBUG oslo_concurrency.lockutils [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.703 249233 DEBUG oslo_concurrency.lockutils [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.704 249233 DEBUG oslo_concurrency.lockutils [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.704 249233 DEBUG oslo_concurrency.lockutils [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.705 249233 INFO nova.compute.manager [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Terminating instance
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.707 249233 DEBUG nova.compute.manager [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 10:23:52 compute-0 podman[265770]: 2026-01-23 10:23:52.615853139 +0000 UTC m=+0.026741398 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:23:52 compute-0 podman[265770]: 2026-01-23 10:23:52.718006724 +0000 UTC m=+0.128894943 container init 01faed8df8bfaef6535f1f7316ddebf9254953c1b25a5dd58b093b73822398a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nightingale, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 10:23:52 compute-0 podman[265770]: 2026-01-23 10:23:52.723429456 +0000 UTC m=+0.134317655 container start 01faed8df8bfaef6535f1f7316ddebf9254953c1b25a5dd58b093b73822398a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 10:23:52 compute-0 laughing_nightingale[265786]: 167 167
Jan 23 10:23:52 compute-0 systemd[1]: libpod-01faed8df8bfaef6535f1f7316ddebf9254953c1b25a5dd58b093b73822398a4.scope: Deactivated successfully.
Jan 23 10:23:52 compute-0 podman[265770]: 2026-01-23 10:23:52.728446826 +0000 UTC m=+0.139335025 container attach 01faed8df8bfaef6535f1f7316ddebf9254953c1b25a5dd58b093b73822398a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:23:52 compute-0 podman[265770]: 2026-01-23 10:23:52.728816647 +0000 UTC m=+0.139704866 container died 01faed8df8bfaef6535f1f7316ddebf9254953c1b25a5dd58b093b73822398a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 23 10:23:52 compute-0 kernel: tapd744a552-c7 (unregistering): left promiscuous mode
Jan 23 10:23:52 compute-0 NetworkManager[48866]: <info>  [1769163832.7495] device (tapd744a552-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 10:23:52 compute-0 ovn_controller[151634]: 2026-01-23T10:23:52Z|00057|binding|INFO|Releasing lport d744a552-c706-444a-8a15-4a98c41eed50 from this chassis (sb_readonly=0)
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.757 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:52 compute-0 ovn_controller[151634]: 2026-01-23T10:23:52Z|00058|binding|INFO|Setting lport d744a552-c706-444a-8a15-4a98c41eed50 down in Southbound
Jan 23 10:23:52 compute-0 ovn_controller[151634]: 2026-01-23T10:23:52Z|00059|binding|INFO|Removing iface tapd744a552-c7 ovn-installed in OVS
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.761 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-27559a3584c0d799780611bbdf6a011bda24f2fd24da7df71090d9cd08e6c30c-merged.mount: Deactivated successfully.
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.779 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:52 compute-0 podman[265770]: 2026-01-23 10:23:52.781022713 +0000 UTC m=+0.191910912 container remove 01faed8df8bfaef6535f1f7316ddebf9254953c1b25a5dd58b093b73822398a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:23:52 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:52.790 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:48:6d 10.100.0.11'], port_security=['fa:16:3e:9f:48:6d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1107750174', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '63ed4545-8ad4-406e-be3b-3aaafb68fbcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fb57e44-e877-47c8-860b-b36d5b5ff599', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1107750174', 'neutron:project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41f899d0-e5bc-43b7-808c-efb54f22dad4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78b908b7-6c71-4e47-8053-0540c37dfe2c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], logical_port=d744a552-c706-444a-8a15-4a98c41eed50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:23:52 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:52.792 161921 INFO neutron.agent.ovn.metadata.agent [-] Port d744a552-c706-444a-8a15-4a98c41eed50 in datapath 2fb57e44-e877-47c8-860b-b36d5b5ff599 unbound from our chassis
Jan 23 10:23:52 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:52.793 161921 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2fb57e44-e877-47c8-860b-b36d5b5ff599, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 23 10:23:52 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:52.795 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[e29fb963-d2fe-410d-944f-23487db82b61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:52 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:52.795 161921 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599 namespace which is not needed anymore
Jan 23 10:23:52 compute-0 systemd[1]: libpod-conmon-01faed8df8bfaef6535f1f7316ddebf9254953c1b25a5dd58b093b73822398a4.scope: Deactivated successfully.
Jan 23 10:23:52 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 23 10:23:52 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Consumed 5.973s CPU time.
Jan 23 10:23:52 compute-0 systemd-machined[216411]: Machine qemu-3-instance-00000008 terminated.
Jan 23 10:23:52 compute-0 neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599[265374]: [NOTICE]   (265378) : haproxy version is 2.8.14-c23fe91
Jan 23 10:23:52 compute-0 neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599[265374]: [NOTICE]   (265378) : path to executable is /usr/sbin/haproxy
Jan 23 10:23:52 compute-0 neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599[265374]: [WARNING]  (265378) : Exiting Master process...
Jan 23 10:23:52 compute-0 neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599[265374]: [WARNING]  (265378) : Exiting Master process...
Jan 23 10:23:52 compute-0 neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599[265374]: [ALERT]    (265378) : Current worker (265380) exited with code 143 (Terminated)
Jan 23 10:23:52 compute-0 neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599[265374]: [WARNING]  (265378) : All workers exited. Exiting... (0)
Jan 23 10:23:52 compute-0 systemd[1]: libpod-b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07.scope: Deactivated successfully.
Jan 23 10:23:52 compute-0 podman[265828]: 2026-01-23 10:23:52.929206101 +0000 UTC m=+0.045291421 container died b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.947 249233 INFO nova.virt.libvirt.driver [-] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Instance destroyed successfully.
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.948 249233 DEBUG nova.objects.instance [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'resources' on Instance uuid 63ed4545-8ad4-406e-be3b-3aaafb68fbcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:23:52 compute-0 podman[265838]: 2026-01-23 10:23:52.950552607 +0000 UTC m=+0.047627301 container create 545f6115c9c111ed44c3355ef08750f83cf40ac88fbc42518e2fc16e05badebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_banzai, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 10:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-35ff13d6c6ea93f969e0a894af456eb73865024cd2b2d15a7913ea00a8ac823f-merged.mount: Deactivated successfully.
Jan 23 10:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07-userdata-shm.mount: Deactivated successfully.
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.966 249233 DEBUG nova.virt.libvirt.vif [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-887663066',display_name='tempest-TestNetworkBasicOps-server-887663066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-887663066',id=8,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA/aI2vPj8RkvZIXg0qwsg+mZSpAN4KYb+jWWGi7brg+su0APA02U+0u4zmFgnmB6GMhllEQLzjYT+6n6+qiaS4xy7JGGjDUIERWMZ9GUsTtnQNtbkViktpWv9cmVqG8aA==',key_name='tempest-TestNetworkBasicOps-1244907344',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:23:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-ed9ze0ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:23:47Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=63ed4545-8ad4-406e-be3b-3aaafb68fbcc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d744a552-c706-444a-8a15-4a98c41eed50", "address": "fa:16:3e:9f:48:6d", "network": {"id": "2fb57e44-e877-47c8-860b-b36d5b5ff599", "bridge": "br-int", "label": "tempest-network-smoke--2143346610", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd744a552-c7", "ovs_interfaceid": "d744a552-c706-444a-8a15-4a98c41eed50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.967 249233 DEBUG nova.network.os_vif_util [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "d744a552-c706-444a-8a15-4a98c41eed50", "address": "fa:16:3e:9f:48:6d", "network": {"id": "2fb57e44-e877-47c8-860b-b36d5b5ff599", "bridge": "br-int", "label": "tempest-network-smoke--2143346610", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd744a552-c7", "ovs_interfaceid": "d744a552-c706-444a-8a15-4a98c41eed50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.968 249233 DEBUG nova.network.os_vif_util [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:48:6d,bridge_name='br-int',has_traffic_filtering=True,id=d744a552-c706-444a-8a15-4a98c41eed50,network=Network(2fb57e44-e877-47c8-860b-b36d5b5ff599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd744a552-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.968 249233 DEBUG os_vif [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:48:6d,bridge_name='br-int',has_traffic_filtering=True,id=d744a552-c706-444a-8a15-4a98c41eed50,network=Network(2fb57e44-e877-47c8-860b-b36d5b5ff599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd744a552-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.970 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:52 compute-0 podman[265828]: 2026-01-23 10:23:52.970557604 +0000 UTC m=+0.086642914 container cleanup b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.970 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd744a552-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.976 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:23:52 compute-0 nova_compute[249229]: 2026-01-23 10:23:52.978 249233 INFO os_vif [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:48:6d,bridge_name='br-int',has_traffic_filtering=True,id=d744a552-c706-444a-8a15-4a98c41eed50,network=Network(2fb57e44-e877-47c8-860b-b36d5b5ff599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd744a552-c7')
Jan 23 10:23:52 compute-0 systemd[1]: libpod-conmon-b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07.scope: Deactivated successfully.
Jan 23 10:23:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:52.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:53 compute-0 systemd[1]: Started libpod-conmon-545f6115c9c111ed44c3355ef08750f83cf40ac88fbc42518e2fc16e05badebb.scope.
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.011 249233 DEBUG nova.compute.manager [req-ddfffa2f-5fd6-4c55-9355-f290b7eba758 req-cdc9c653-4c76-49ff-87de-fabbd640df13 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Received event network-vif-unplugged-d744a552-c706-444a-8a15-4a98c41eed50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.011 249233 DEBUG oslo_concurrency.lockutils [req-ddfffa2f-5fd6-4c55-9355-f290b7eba758 req-cdc9c653-4c76-49ff-87de-fabbd640df13 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.011 249233 DEBUG oslo_concurrency.lockutils [req-ddfffa2f-5fd6-4c55-9355-f290b7eba758 req-cdc9c653-4c76-49ff-87de-fabbd640df13 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.012 249233 DEBUG oslo_concurrency.lockutils [req-ddfffa2f-5fd6-4c55-9355-f290b7eba758 req-cdc9c653-4c76-49ff-87de-fabbd640df13 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.012 249233 DEBUG nova.compute.manager [req-ddfffa2f-5fd6-4c55-9355-f290b7eba758 req-cdc9c653-4c76-49ff-87de-fabbd640df13 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] No waiting events found dispatching network-vif-unplugged-d744a552-c706-444a-8a15-4a98c41eed50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.012 249233 DEBUG nova.compute.manager [req-ddfffa2f-5fd6-4c55-9355-f290b7eba758 req-cdc9c653-4c76-49ff-87de-fabbd640df13 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Received event network-vif-unplugged-d744a552-c706-444a-8a15-4a98c41eed50 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 23 10:23:53 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:23:53 compute-0 podman[265838]: 2026-01-23 10:23:52.932172769 +0000 UTC m=+0.029247463 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ddd36185020365e0adc84156d8e890a0b8b4775fe056d5e20a74c74c79de72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ddd36185020365e0adc84156d8e890a0b8b4775fe056d5e20a74c74c79de72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ddd36185020365e0adc84156d8e890a0b8b4775fe056d5e20a74c74c79de72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ddd36185020365e0adc84156d8e890a0b8b4775fe056d5e20a74c74c79de72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:53 compute-0 podman[265887]: 2026-01-23 10:23:53.036672705 +0000 UTC m=+0.042918431 container remove b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:23:53 compute-0 podman[265838]: 2026-01-23 10:23:53.041826189 +0000 UTC m=+0.138900903 container init 545f6115c9c111ed44c3355ef08750f83cf40ac88fbc42518e2fc16e05badebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_banzai, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 10:23:53 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:53.041 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[23ec60e8-13db-483c-b3ff-7d8c1533b099]: (4, ('Fri Jan 23 10:23:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599 (b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07)\nb32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07\nFri Jan 23 10:23:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599 (b32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07)\nb32b674b9d53093bf4462ffd0b5c39e0a28039dbcd3f8d8e36d9c29dd751ca07\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:53 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:53.043 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[95691c34-2cee-4fb1-b0ce-a6a521a6d1f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:53 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:53.044 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fb57e44-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.046 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:53 compute-0 podman[265838]: 2026-01-23 10:23:53.049859228 +0000 UTC m=+0.146933922 container start 545f6115c9c111ed44c3355ef08750f83cf40ac88fbc42518e2fc16e05badebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 23 10:23:53 compute-0 podman[265838]: 2026-01-23 10:23:53.052961711 +0000 UTC m=+0.150036405 container attach 545f6115c9c111ed44c3355ef08750f83cf40ac88fbc42518e2fc16e05badebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_banzai, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:23:53 compute-0 kernel: tap2fb57e44-e0: left promiscuous mode
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.061 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:53 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:53.066 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ec9a06-8d03-4f5f-ad5a-a902e62d631a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.082 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:53 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:53.085 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[00211369-73be-4a69-81ec-fd6f7ec103c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:53 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:53.087 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[59a1181f-9556-420a-8e15-8a260cacf9d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:53 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:53.101 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[c14417d9-45df-468c-b8e6-29ea69b5041d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494830, 'reachable_time': 28119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265927, 'error': None, 'target': 'ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:53 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:53.104 162436 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2fb57e44-e877-47c8-860b-b36d5b5ff599 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 23 10:23:53 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:53.104 162436 DEBUG oslo.privsep.daemon [-] privsep: reply[cf210b2f-b8b9-4441-b11f-914c96e17142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]: {
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:     "1": [
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:         {
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             "devices": [
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "/dev/loop3"
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             ],
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             "lv_name": "ceph_lv0",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             "lv_size": "21470642176",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             "name": "ceph_lv0",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             "tags": {
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.cluster_name": "ceph",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.crush_device_class": "",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.encrypted": "0",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.osd_id": "1",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.type": "block",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.vdo": "0",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:                 "ceph.with_tpm": "0"
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             },
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             "type": "block",
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:             "vg_name": "ceph_vg0"
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:         }
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]:     ]
Jan 23 10:23:53 compute-0 upbeat_banzai[265911]: }
Jan 23 10:23:53 compute-0 systemd[1]: libpod-545f6115c9c111ed44c3355ef08750f83cf40ac88fbc42518e2fc16e05badebb.scope: Deactivated successfully.
Jan 23 10:23:53 compute-0 podman[265838]: 2026-01-23 10:23:53.375871858 +0000 UTC m=+0.472946542 container died 545f6115c9c111ed44c3355ef08750f83cf40ac88fbc42518e2fc16e05badebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_banzai, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.406 249233 INFO nova.virt.libvirt.driver [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Deleting instance files /var/lib/nova/instances/63ed4545-8ad4-406e-be3b-3aaafb68fbcc_del
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.407 249233 INFO nova.virt.libvirt.driver [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Deletion of /var/lib/nova/instances/63ed4545-8ad4-406e-be3b-3aaafb68fbcc_del complete
Jan 23 10:23:53 compute-0 podman[265838]: 2026-01-23 10:23:53.416423477 +0000 UTC m=+0.513498171 container remove 545f6115c9c111ed44c3355ef08750f83cf40ac88fbc42518e2fc16e05badebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 10:23:53 compute-0 systemd[1]: libpod-conmon-545f6115c9c111ed44c3355ef08750f83cf40ac88fbc42518e2fc16e05badebb.scope: Deactivated successfully.
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.458 249233 INFO nova.compute.manager [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Took 0.75 seconds to destroy the instance on the hypervisor.
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.458 249233 DEBUG oslo.service.loopingcall [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.459 249233 DEBUG nova.compute.manager [-] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 10:23:53 compute-0 nova_compute[249229]: 2026-01-23 10:23:53.459 249233 DEBUG nova.network.neutron [-] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 10:23:53 compute-0 sudo[265704]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:53 compute-0 sudo[265946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:23:53 compute-0 sudo[265946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:53 compute-0 sudo[265946]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:53 compute-0 sudo[265971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:23:53 compute-0 sudo[265971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ddd36185020365e0adc84156d8e890a0b8b4775fe056d5e20a74c74c79de72-merged.mount: Deactivated successfully.
Jan 23 10:23:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d2fb57e44\x2de877\x2d47c8\x2d860b\x2db36d5b5ff599.mount: Deactivated successfully.
Jan 23 10:23:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:53.676Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:23:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:53.678Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:23:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:53 compute-0 ceph-mon[74335]: pgmap v918: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 23 10:23:53 compute-0 podman[266038]: 2026-01-23 10:23:53.962877148 +0000 UTC m=+0.036676414 container create d2e26083b94ea9c41f0d21e40c1bfc4c3082c5a5150000550c794e28162b333e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:23:53 compute-0 systemd[1]: Started libpod-conmon-d2e26083b94ea9c41f0d21e40c1bfc4c3082c5a5150000550c794e28162b333e.scope.
Jan 23 10:23:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:23:54 compute-0 podman[266038]: 2026-01-23 10:23:54.037132822 +0000 UTC m=+0.110932108 container init d2e26083b94ea9c41f0d21e40c1bfc4c3082c5a5150000550c794e28162b333e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:23:54 compute-0 podman[266038]: 2026-01-23 10:23:53.947749557 +0000 UTC m=+0.021548853 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:23:54 compute-0 podman[266038]: 2026-01-23 10:23:54.044130461 +0000 UTC m=+0.117929747 container start d2e26083b94ea9c41f0d21e40c1bfc4c3082c5a5150000550c794e28162b333e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 23 10:23:54 compute-0 podman[266038]: 2026-01-23 10:23:54.047487111 +0000 UTC m=+0.121286397 container attach d2e26083b94ea9c41f0d21e40c1bfc4c3082c5a5150000550c794e28162b333e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 10:23:54 compute-0 determined_shamir[266054]: 167 167
Jan 23 10:23:54 compute-0 systemd[1]: libpod-d2e26083b94ea9c41f0d21e40c1bfc4c3082c5a5150000550c794e28162b333e.scope: Deactivated successfully.
Jan 23 10:23:54 compute-0 conmon[266054]: conmon d2e26083b94ea9c41f0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2e26083b94ea9c41f0d21e40c1bfc4c3082c5a5150000550c794e28162b333e.scope/container/memory.events
Jan 23 10:23:54 compute-0 podman[266038]: 2026-01-23 10:23:54.050934204 +0000 UTC m=+0.124733470 container died d2e26083b94ea9c41f0d21e40c1bfc4c3082c5a5150000550c794e28162b333e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 23 10:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba7d161f33a2dfaa89b9af6f4c05d14ae7d50e1df9d9720c3744b571f6275041-merged.mount: Deactivated successfully.
Jan 23 10:23:54 compute-0 podman[266038]: 2026-01-23 10:23:54.095908054 +0000 UTC m=+0.169707310 container remove d2e26083b94ea9c41f0d21e40c1bfc4c3082c5a5150000550c794e28162b333e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:23:54 compute-0 systemd[1]: libpod-conmon-d2e26083b94ea9c41f0d21e40c1bfc4c3082c5a5150000550c794e28162b333e.scope: Deactivated successfully.
Jan 23 10:23:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:54.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:54 compute-0 podman[266080]: 2026-01-23 10:23:54.294735092 +0000 UTC m=+0.039546030 container create 90b45d45d82fa0285ce592041dde2467ce738d7313cafa1b29adce79ff3a1954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mcnulty, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:23:54 compute-0 systemd[1]: Started libpod-conmon-90b45d45d82fa0285ce592041dde2467ce738d7313cafa1b29adce79ff3a1954.scope.
Jan 23 10:23:54 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8103e766fd59a75942ad6bbbbbccc6bc6c79fe02f2b53b11e3b99eb15390597/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8103e766fd59a75942ad6bbbbbccc6bc6c79fe02f2b53b11e3b99eb15390597/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8103e766fd59a75942ad6bbbbbccc6bc6c79fe02f2b53b11e3b99eb15390597/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8103e766fd59a75942ad6bbbbbccc6bc6c79fe02f2b53b11e3b99eb15390597/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:23:54 compute-0 podman[266080]: 2026-01-23 10:23:54.277696214 +0000 UTC m=+0.022507162 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:23:54 compute-0 podman[266080]: 2026-01-23 10:23:54.379577592 +0000 UTC m=+0.124388520 container init 90b45d45d82fa0285ce592041dde2467ce738d7313cafa1b29adce79ff3a1954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 10:23:54 compute-0 podman[266080]: 2026-01-23 10:23:54.385763106 +0000 UTC m=+0.130574034 container start 90b45d45d82fa0285ce592041dde2467ce738d7313cafa1b29adce79ff3a1954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mcnulty, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:23:54 compute-0 podman[266080]: 2026-01-23 10:23:54.391136416 +0000 UTC m=+0.135947374 container attach 90b45d45d82fa0285ce592041dde2467ce738d7313cafa1b29adce79ff3a1954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 23 10:23:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 23 10:23:54 compute-0 nova_compute[249229]: 2026-01-23 10:23:54.736 249233 DEBUG nova.network.neutron [req-5a65b80b-c37f-411f-8c61-e31825588555 req-612c1e33-74e9-408e-800e-8052f93fa320 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Updated VIF entry in instance network info cache for port d744a552-c706-444a-8a15-4a98c41eed50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 10:23:54 compute-0 nova_compute[249229]: 2026-01-23 10:23:54.736 249233 DEBUG nova.network.neutron [req-5a65b80b-c37f-411f-8c61-e31825588555 req-612c1e33-74e9-408e-800e-8052f93fa320 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Updating instance_info_cache with network_info: [{"id": "d744a552-c706-444a-8a15-4a98c41eed50", "address": "fa:16:3e:9f:48:6d", "network": {"id": "2fb57e44-e877-47c8-860b-b36d5b5ff599", "bridge": "br-int", "label": "tempest-network-smoke--2143346610", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd744a552-c7", "ovs_interfaceid": "d744a552-c706-444a-8a15-4a98c41eed50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:23:54 compute-0 nova_compute[249229]: 2026-01-23 10:23:54.754 249233 DEBUG oslo_concurrency.lockutils [req-5a65b80b-c37f-411f-8c61-e31825588555 req-612c1e33-74e9-408e-800e-8052f93fa320 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Releasing lock "refresh_cache-63ed4545-8ad4-406e-be3b-3aaafb68fbcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:23:54 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/102354 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 23 10:23:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:55.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:55 compute-0 lvm[266170]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:23:55 compute-0 lvm[266170]: VG ceph_vg0 finished
Jan 23 10:23:55 compute-0 practical_mcnulty[266096]: {}
Jan 23 10:23:55 compute-0 nova_compute[249229]: 2026-01-23 10:23:55.104 249233 DEBUG nova.compute.manager [req-12b47049-c3d9-4473-8897-3455bae2c8fc req-353a31c7-69b6-4806-b8f0-0dd33ff1991c 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Received event network-vif-plugged-d744a552-c706-444a-8a15-4a98c41eed50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:23:55 compute-0 nova_compute[249229]: 2026-01-23 10:23:55.105 249233 DEBUG oslo_concurrency.lockutils [req-12b47049-c3d9-4473-8897-3455bae2c8fc req-353a31c7-69b6-4806-b8f0-0dd33ff1991c 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:55 compute-0 nova_compute[249229]: 2026-01-23 10:23:55.105 249233 DEBUG oslo_concurrency.lockutils [req-12b47049-c3d9-4473-8897-3455bae2c8fc req-353a31c7-69b6-4806-b8f0-0dd33ff1991c 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:55 compute-0 nova_compute[249229]: 2026-01-23 10:23:55.105 249233 DEBUG oslo_concurrency.lockutils [req-12b47049-c3d9-4473-8897-3455bae2c8fc req-353a31c7-69b6-4806-b8f0-0dd33ff1991c 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:55 compute-0 nova_compute[249229]: 2026-01-23 10:23:55.105 249233 DEBUG nova.compute.manager [req-12b47049-c3d9-4473-8897-3455bae2c8fc req-353a31c7-69b6-4806-b8f0-0dd33ff1991c 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] No waiting events found dispatching network-vif-plugged-d744a552-c706-444a-8a15-4a98c41eed50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:23:55 compute-0 nova_compute[249229]: 2026-01-23 10:23:55.105 249233 WARNING nova.compute.manager [req-12b47049-c3d9-4473-8897-3455bae2c8fc req-353a31c7-69b6-4806-b8f0-0dd33ff1991c 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Received unexpected event network-vif-plugged-d744a552-c706-444a-8a15-4a98c41eed50 for instance with vm_state active and task_state deleting.
Jan 23 10:23:55 compute-0 systemd[1]: libpod-90b45d45d82fa0285ce592041dde2467ce738d7313cafa1b29adce79ff3a1954.scope: Deactivated successfully.
Jan 23 10:23:55 compute-0 systemd[1]: libpod-90b45d45d82fa0285ce592041dde2467ce738d7313cafa1b29adce79ff3a1954.scope: Consumed 1.197s CPU time.
Jan 23 10:23:55 compute-0 podman[266080]: 2026-01-23 10:23:55.120655726 +0000 UTC m=+0.865466644 container died 90b45d45d82fa0285ce592041dde2467ce738d7313cafa1b29adce79ff3a1954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mcnulty, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:23:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8103e766fd59a75942ad6bbbbbccc6bc6c79fe02f2b53b11e3b99eb15390597-merged.mount: Deactivated successfully.
Jan 23 10:23:55 compute-0 podman[266080]: 2026-01-23 10:23:55.160808633 +0000 UTC m=+0.905619561 container remove 90b45d45d82fa0285ce592041dde2467ce738d7313cafa1b29adce79ff3a1954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mcnulty, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:23:55 compute-0 systemd[1]: libpod-conmon-90b45d45d82fa0285ce592041dde2467ce738d7313cafa1b29adce79ff3a1954.scope: Deactivated successfully.
Jan 23 10:23:55 compute-0 sudo[265971]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:23:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:23:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:23:55 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:23:55 compute-0 sudo[266185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:23:55 compute-0 sudo[266185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:23:55 compute-0 sudo[266185]: pam_unix(sudo:session): session closed for user root
Jan 23 10:23:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:55 compute-0 nova_compute[249229]: 2026-01-23 10:23:55.650 249233 DEBUG nova.network.neutron [-] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:23:55 compute-0 nova_compute[249229]: 2026-01-23 10:23:55.666 249233 INFO nova.compute.manager [-] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Took 2.21 seconds to deallocate network for instance.
Jan 23 10:23:55 compute-0 nova_compute[249229]: 2026-01-23 10:23:55.708 249233 DEBUG oslo_concurrency.lockutils [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:55 compute-0 nova_compute[249229]: 2026-01-23 10:23:55.708 249233 DEBUG oslo_concurrency.lockutils [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:55 compute-0 nova_compute[249229]: 2026-01-23 10:23:55.768 249233 DEBUG oslo_concurrency.processutils [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:23:55 compute-0 ceph-mon[74335]: pgmap v919: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 23 10:23:55 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:23:55 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:23:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:56.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:23:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2411302437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:56 compute-0 nova_compute[249229]: 2026-01-23 10:23:56.223 249233 DEBUG oslo_concurrency.processutils [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:23:56 compute-0 nova_compute[249229]: 2026-01-23 10:23:56.229 249233 DEBUG nova.compute.provider_tree [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:23:56 compute-0 nova_compute[249229]: 2026-01-23 10:23:56.248 249233 DEBUG nova.scheduler.client.report [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:23:56 compute-0 nova_compute[249229]: 2026-01-23 10:23:56.265 249233 DEBUG oslo_concurrency.lockutils [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:56 compute-0 nova_compute[249229]: 2026-01-23 10:23:56.289 249233 INFO nova.scheduler.client.report [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Deleted allocations for instance 63ed4545-8ad4-406e-be3b-3aaafb68fbcc
Jan 23 10:23:56 compute-0 nova_compute[249229]: 2026-01-23 10:23:56.345 249233 DEBUG oslo_concurrency.lockutils [None req-63589b0f-1795-4216-8252-619e49116cf4 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "63ed4545-8ad4-406e-be3b-3aaafb68fbcc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 61 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Jan 23 10:23:56 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2411302437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:23:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:23:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:23:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:57.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:23:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6440047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:23:57.785Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:23:57 compute-0 ceph-mon[74335]: pgmap v920: 353 pgs: 353 active+clean; 61 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Jan 23 10:23:57 compute-0 nova_compute[249229]: 2026-01-23 10:23:57.973 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:58 compute-0 nova_compute[249229]: 2026-01-23 10:23:58.084 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:23:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:58.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 737 KiB/s wr, 123 op/s
Jan 23 10:23:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:23:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:23:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:59.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:23:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:23:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:23:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:59.776 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:23:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:59.777 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:23:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:23:59.777 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:23:59 compute-0 ceph-mon[74335]: pgmap v921: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 737 KiB/s wr, 123 op/s
Jan 23 10:23:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:59] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Jan 23 10:23:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:23:59] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Jan 23 10:24:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.005000149s ======
Jan 23 10:24:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:00.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000149s
Jan 23 10:24:00 compute-0 podman[266239]: 2026-01-23 10:24:00.537325343 +0000 UTC m=+0.061699611 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 10:24:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 92 op/s
Jan 23 10:24:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:01.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:01 compute-0 ceph-mon[74335]: pgmap v922: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 92 op/s
Jan 23 10:24:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:02.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 93 op/s
Jan 23 10:24:02 compute-0 nova_compute[249229]: 2026-01-23 10:24:02.977 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:03.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:03 compute-0 nova_compute[249229]: 2026-01-23 10:24:03.088 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:03.679Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:24:03 compute-0 nova_compute[249229]: 2026-01-23 10:24:03.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:24:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:03 compute-0 ceph-mon[74335]: pgmap v923: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 93 op/s
Jan 23 10:24:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:04.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 23 10:24:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:24:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:05.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:24:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:24:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:24:05 compute-0 sudo[266265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:24:05 compute-0 sudo[266265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:24:05 compute-0 sudo[266265]: pam_unix(sudo:session): session closed for user root
Jan 23 10:24:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:05 compute-0 nova_compute[249229]: 2026-01-23 10:24:05.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:24:05 compute-0 nova_compute[249229]: 2026-01-23 10:24:05.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:24:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:05 compute-0 ceph-mon[74335]: pgmap v924: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 23 10:24:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:24:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:06.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 23 10:24:06 compute-0 nova_compute[249229]: 2026-01-23 10:24:06.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:24:06 compute-0 nova_compute[249229]: 2026-01-23 10:24:06.754 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:24:06 compute-0 nova_compute[249229]: 2026-01-23 10:24:06.754 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:24:06 compute-0 nova_compute[249229]: 2026-01-23 10:24:06.754 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:24:06 compute-0 nova_compute[249229]: 2026-01-23 10:24:06.755 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:24:06 compute-0 nova_compute[249229]: 2026-01-23 10:24:06.755 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:24:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:07.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3596100017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:24:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:24:07 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334627562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.240 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.398 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.401 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4529MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.402 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.402 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.477 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.477 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.495 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:24:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:07.785Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:24:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:07.786Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:24:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:07.786Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:24:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:24:07 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749144063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.943 249233 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163832.9427648, 63ed4545-8ad4-406e-be3b-3aaafb68fbcc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.944 249233 INFO nova.compute.manager [-] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] VM Stopped (Lifecycle Event)
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.950 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.956 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:24:07 compute-0 nova_compute[249229]: 2026-01-23 10:24:07.980 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:08 compute-0 nova_compute[249229]: 2026-01-23 10:24:08.092 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:08 compute-0 nova_compute[249229]: 2026-01-23 10:24:08.100 249233 DEBUG nova.compute.manager [None req-ca265552-ac38-450d-b899-860f488665f6 - - - - - -] [instance: 63ed4545-8ad4-406e-be3b-3aaafb68fbcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:24:08 compute-0 nova_compute[249229]: 2026-01-23 10:24:08.102 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:24:08 compute-0 ceph-mon[74335]: pgmap v925: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 23 10:24:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1334627562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:24:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1749144063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:24:08 compute-0 nova_compute[249229]: 2026-01-23 10:24:08.131 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:24:08 compute-0 nova_compute[249229]: 2026-01-23 10:24:08.132 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:24:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:08.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 34 op/s
Jan 23 10:24:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:09.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:09 compute-0 nova_compute[249229]: 2026-01-23 10:24:09.132 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:24:09 compute-0 nova_compute[249229]: 2026-01-23 10:24:09.133 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:24:09 compute-0 nova_compute[249229]: 2026-01-23 10:24:09.133 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:24:09 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1478679152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:24:09 compute-0 ceph-mon[74335]: pgmap v926: 353 pgs: 353 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 34 op/s
Jan 23 10:24:09 compute-0 nova_compute[249229]: 2026-01-23 10:24:09.151 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:24:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:09 compute-0 nova_compute[249229]: 2026-01-23 10:24:09.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:24:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:09] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:24:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:09] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:24:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3295464724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:24:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:10.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.4 MiB/s wr, 14 op/s
Jan 23 10:24:10 compute-0 nova_compute[249229]: 2026-01-23 10:24:10.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:24:10 compute-0 nova_compute[249229]: 2026-01-23 10:24:10.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:24:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:11.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:11 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2577585059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:24:11 compute-0 ceph-mon[74335]: pgmap v927: 353 pgs: 353 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.4 MiB/s wr, 14 op/s
Jan 23 10:24:11 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/287250540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:24:11 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/308211205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:24:11 compute-0 nova_compute[249229]: 2026-01-23 10:24:11.708 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:24:11 compute-0 nova_compute[249229]: 2026-01-23 10:24:11.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:24:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c000d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/469737160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:24:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:24:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:12.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:24:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 23 10:24:12 compute-0 nova_compute[249229]: 2026-01-23 10:24:12.984 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 10:24:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:13.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 10:24:13 compute-0 nova_compute[249229]: 2026-01-23 10:24:13.094 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:13 compute-0 ceph-mon[74335]: pgmap v928: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 23 10:24:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:13.681Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:24:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:13.681Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:24:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:13.682Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:24:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:14.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 23 10:24:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:15.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c001980 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:15 compute-0 ceph-mon[74335]: pgmap v929: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 23 10:24:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:16.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 23 10:24:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:17.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:17 compute-0 ceph-mon[74335]: pgmap v930: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 23 10:24:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/102417 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:24:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c001980 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:17.787Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:24:18 compute-0 nova_compute[249229]: 2026-01-23 10:24:18.030 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:18 compute-0 nova_compute[249229]: 2026-01-23 10:24:18.096 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:18.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:24:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:19.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:19 compute-0 ceph-mon[74335]: pgmap v931: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:24:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:19] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:24:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:19] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:24:20
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.mgr', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'volumes', '.nfs', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data']
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:24:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:24:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:24:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:20.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:24:20 compute-0 podman[266352]: 2026-01-23 10:24:20.563407818 +0000 UTC m=+0.092233605 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 23 10:24:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 359 KiB/s wr, 86 op/s
Jan 23 10:24:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:24:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:21.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003be0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:21 compute-0 ceph-mon[74335]: pgmap v932: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 359 KiB/s wr, 86 op/s
Jan 23 10:24:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:22 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Check health
Jan 23 10:24:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:22.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 360 KiB/s wr, 113 op/s
Jan 23 10:24:23 compute-0 nova_compute[249229]: 2026-01-23 10:24:23.034 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:23.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:23 compute-0 nova_compute[249229]: 2026-01-23 10:24:23.097 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:23.683Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:24:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:24 compute-0 ceph-mon[74335]: pgmap v933: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 360 KiB/s wr, 113 op/s
Jan 23 10:24:24 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2724804691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:24:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 98 op/s
Jan 23 10:24:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:25.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:25 compute-0 sudo[266385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:24:25 compute-0 sudo[266385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:24:25 compute-0 sudo[266385]: pam_unix(sudo:session): session closed for user root
Jan 23 10:24:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:25 compute-0 ceph-mon[74335]: pgmap v934: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 98 op/s
Jan 23 10:24:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:26.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 98 op/s
Jan 23 10:24:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:27.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:27 compute-0 ceph-mon[74335]: pgmap v935: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 98 op/s
Jan 23 10:24:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:27.789Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:24:28 compute-0 nova_compute[249229]: 2026-01-23 10:24:28.074 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:28 compute-0 nova_compute[249229]: 2026-01-23 10:24:28.099 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:28.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 84 op/s
Jan 23 10:24:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/102428 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:24:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:29.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:29 compute-0 ceph-mon[74335]: pgmap v936: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 84 op/s
Jan 23 10:24:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:29] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Jan 23 10:24:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:29] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Jan 23 10:24:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:30.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:30 compute-0 nova_compute[249229]: 2026-01-23 10:24:30.601 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:30 compute-0 nova_compute[249229]: 2026-01-23 10:24:30.682 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 23 10:24:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:24:30.924 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:24:30 compute-0 nova_compute[249229]: 2026-01-23 10:24:30.925 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:30 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:24:30.925 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:24:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:31.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:31 compute-0 podman[266417]: 2026-01-23 10:24:31.530420746 +0000 UTC m=+0.057705807 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 10:24:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:32.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:32 compute-0 ceph-mon[74335]: pgmap v937: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 23 10:24:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 23 10:24:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:33.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:33 compute-0 nova_compute[249229]: 2026-01-23 10:24:33.125 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:33 compute-0 ceph-mon[74335]: pgmap v938: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 23 10:24:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003be0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:33.684Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:24:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:33.685Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:24:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe648003be0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:34.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:24:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:24:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:24:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:35.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:35 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:24:35.926 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:24:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:36.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:24:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:37.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:37 compute-0 ceph-mon[74335]: pgmap v939: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 23 10:24:37 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:24:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:37.789Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:24:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:38 compute-0 nova_compute[249229]: 2026-01-23 10:24:38.127 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:24:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:38.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:24:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:38 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 23 10:24:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:39.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:39 compute-0 ceph-mon[74335]: pgmap v940: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:24:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:39] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 23 10:24:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:39] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 23 10:24:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:40.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:24:40 compute-0 ceph-mon[74335]: pgmap v941: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:24:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638002550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:42 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 23 10:24:42 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:42 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:24:42 compute-0 ceph-mon[74335]: pgmap v942: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 10:24:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:42.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Jan 23 10:24:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:43.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:43 compute-0 nova_compute[249229]: 2026-01-23 10:24:43.129 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:24:43 compute-0 ceph-mon[74335]: pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Jan 23 10:24:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:43.685Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:24:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:43 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:44.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 23 10:24:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:45.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:45 compute-0 sudo[266452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:24:45 compute-0 sudo[266452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:24:45 compute-0 sudo[266452]: pam_unix(sudo:session): session closed for user root
Jan 23 10:24:45 compute-0 ceph-mon[74335]: pgmap v944: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 23 10:24:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638002550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:45 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:45 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:46.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:24:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638002550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:47.791Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:24:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:47 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:48 compute-0 nova_compute[249229]: 2026-01-23 10:24:48.131 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:24:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:48.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:48 compute-0 ceph-mon[74335]: pgmap v945: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 23 10:24:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Jan 23 10:24:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=grafana.update.checker t=2026-01-23T10:24:48.749425388Z level=info msg="Update check succeeded" duration=52.366632ms
Jan 23 10:24:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=plugins.update.checker t=2026-01-23T10:24:48.752176418Z level=info msg="Update check succeeded" duration=57.899803ms
Jan 23 10:24:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=cleanup t=2026-01-23T10:24:48.752213829Z level=info msg="Completed cleanup jobs" duration=128.082489ms
Jan 23 10:24:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:49.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:24:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3038382262' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:24:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:24:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3038382262' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:24:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:49 compute-0 ceph-mon[74335]: pgmap v946: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Jan 23 10:24:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3038382262' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:24:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3038382262' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:24:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:49 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638002550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:49] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 23 10:24:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:49] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 23 10:24:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:24:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:24:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:24:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:24:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:24:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:24:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:24:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:24:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:50.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Jan 23 10:24:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:24:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:51.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:51 compute-0 podman[266483]: 2026-01-23 10:24:51.55164889 +0000 UTC m=+0.080039142 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Jan 23 10:24:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:51 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:51 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:52 compute-0 ceph-mon[74335]: pgmap v947: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Jan 23 10:24:52 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:52 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:24:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:52.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 23 10:24:53 compute-0 ceph-mon[74335]: pgmap v948: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 23 10:24:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:53.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:53 compute-0 nova_compute[249229]: 2026-01-23 10:24:53.133 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:24:53 compute-0 nova_compute[249229]: 2026-01-23 10:24:53.134 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:24:53 compute-0 nova_compute[249229]: 2026-01-23 10:24:53.134 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 23 10:24:53 compute-0 nova_compute[249229]: 2026-01-23 10:24:53.134 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:24:53 compute-0 nova_compute[249229]: 2026-01-23 10:24:53.135 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:24:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638002550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638002550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:53.686Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:24:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:53 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:54.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:24:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:55.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:55 compute-0 sudo[266514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:24:55 compute-0 sudo[266514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:24:55 compute-0 sudo[266514]: pam_unix(sudo:session): session closed for user root
Jan 23 10:24:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:55 compute-0 sudo[266539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:24:55 compute-0 sudo[266539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:24:55 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:55 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:56 compute-0 ceph-mon[74335]: pgmap v949: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:24:56 compute-0 sudo[266539]: pam_unix(sudo:session): session closed for user root
Jan 23 10:24:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:56.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:56 compute-0 sudo[266595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:24:56 compute-0 sudo[266595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:24:56 compute-0 sudo[266595]: pam_unix(sudo:session): session closed for user root
Jan 23 10:24:56 compute-0 sudo[266620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 23 10:24:56 compute-0 sudo[266620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:24:56 compute-0 sudo[266620]: pam_unix(sudo:session): session closed for user root
Jan 23 10:24:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:24:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:24:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 23 10:24:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 10:24:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:24:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 10:24:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 10:24:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 10:24:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 23 10:24:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 10:24:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:57.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1853192220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:24:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:57 compute-0 ceph-mon[74335]: pgmap v950: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 23 10:24:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 10:24:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:57 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 10:24:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:24:57.792Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:24:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:57 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 10:24:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 10:24:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:58 compute-0 nova_compute[249229]: 2026-01-23 10:24:58.137 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:24:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:24:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:58.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:24:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 23 10:24:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 10:24:59 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:59 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 10:24:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:24:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:24:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:24:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:59.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:24:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:24:59.777 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:24:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:24:59.778 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:24:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:24:59.778 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:24:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:24:59 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:24:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:59] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:24:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:24:59] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:25:00 compute-0 ceph-mon[74335]: pgmap v951: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 23 10:25:00 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:00 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:00.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 23 10:25:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 10:25:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 10:25:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 23 10:25:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 10:25:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:25:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:25:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:25:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:25:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Jan 23 10:25:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:25:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:25:01 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:25:01 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:25:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:25:01 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:25:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:25:01 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:25:01 compute-0 sudo[266671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:25:01 compute-0 sudo[266671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:25:01 compute-0 sudo[266671]: pam_unix(sudo:session): session closed for user root
Jan 23 10:25:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:01.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:01 compute-0 sudo[266696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:25:01 compute-0 sudo[266696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:25:01 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 23 10:25:01 compute-0 ceph-mon[74335]: pgmap v952: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 23 10:25:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/207192348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:25:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 10:25:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:25:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:25:01 compute-0 ceph-mon[74335]: pgmap v953: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Jan 23 10:25:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:25:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:25:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:25:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:01 compute-0 podman[266761]: 2026-01-23 10:25:01.488462716 +0000 UTC m=+0.022335286 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:25:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:01 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:01 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:01 compute-0 podman[266761]: 2026-01-23 10:25:01.854340392 +0000 UTC m=+0.388212942 container create f72f78c110878796981a4a5fe498bf5ec9a2980961d335c5913ac9436949642a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_mccarthy, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:25:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:02 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:02 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:25:02 compute-0 systemd[1]: Started libpod-conmon-f72f78c110878796981a4a5fe498bf5ec9a2980961d335c5913ac9436949642a.scope.
Jan 23 10:25:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:25:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:25:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:02.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:25:02 compute-0 podman[266761]: 2026-01-23 10:25:02.297130119 +0000 UTC m=+0.831002689 container init f72f78c110878796981a4a5fe498bf5ec9a2980961d335c5913ac9436949642a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_mccarthy, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 10:25:02 compute-0 ceph-mon[74335]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 23 10:25:02 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2995660539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:25:02 compute-0 podman[266761]: 2026-01-23 10:25:02.305670856 +0000 UTC m=+0.839543406 container start f72f78c110878796981a4a5fe498bf5ec9a2980961d335c5913ac9436949642a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:25:02 compute-0 recursing_mccarthy[266790]: 167 167
Jan 23 10:25:02 compute-0 systemd[1]: libpod-f72f78c110878796981a4a5fe498bf5ec9a2980961d335c5913ac9436949642a.scope: Deactivated successfully.
Jan 23 10:25:02 compute-0 conmon[266790]: conmon f72f78c110878796981a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f72f78c110878796981a4a5fe498bf5ec9a2980961d335c5913ac9436949642a.scope/container/memory.events
Jan 23 10:25:02 compute-0 podman[266761]: 2026-01-23 10:25:02.366461342 +0000 UTC m=+0.900333912 container attach f72f78c110878796981a4a5fe498bf5ec9a2980961d335c5913ac9436949642a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_mccarthy, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:25:02 compute-0 podman[266761]: 2026-01-23 10:25:02.367165182 +0000 UTC m=+0.901037732 container died f72f78c110878796981a4a5fe498bf5ec9a2980961d335c5913ac9436949642a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_mccarthy, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb4785b265a55010b5440b34ffdf5ad49f7f3ccafc5aea5dfb8c701f7d020585-merged.mount: Deactivated successfully.
Jan 23 10:25:02 compute-0 podman[266761]: 2026-01-23 10:25:02.428703499 +0000 UTC m=+0.962576049 container remove f72f78c110878796981a4a5fe498bf5ec9a2980961d335c5913ac9436949642a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_mccarthy, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:25:02 compute-0 systemd[1]: libpod-conmon-f72f78c110878796981a4a5fe498bf5ec9a2980961d335c5913ac9436949642a.scope: Deactivated successfully.
Jan 23 10:25:02 compute-0 podman[266777]: 2026-01-23 10:25:02.472206425 +0000 UTC m=+0.566611733 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 10:25:02 compute-0 podman[266826]: 2026-01-23 10:25:02.594520768 +0000 UTC m=+0.046953397 container create aefc17fbfdb2fe2eb4d84cd95a9e8f6fed9372c0c841327ab4cbf25a90474b0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Jan 23 10:25:02 compute-0 systemd[1]: Started libpod-conmon-aefc17fbfdb2fe2eb4d84cd95a9e8f6fed9372c0c841327ab4cbf25a90474b0a.scope.
Jan 23 10:25:02 compute-0 podman[266826]: 2026-01-23 10:25:02.574098548 +0000 UTC m=+0.026531167 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:25:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/296b9e4e384c44a3c531850aad790ba0d7d7edd93490a8b028639e504c78e826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/296b9e4e384c44a3c531850aad790ba0d7d7edd93490a8b028639e504c78e826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/296b9e4e384c44a3c531850aad790ba0d7d7edd93490a8b028639e504c78e826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/296b9e4e384c44a3c531850aad790ba0d7d7edd93490a8b028639e504c78e826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/296b9e4e384c44a3c531850aad790ba0d7d7edd93490a8b028639e504c78e826/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:02 compute-0 podman[266826]: 2026-01-23 10:25:02.699251072 +0000 UTC m=+0.151683691 container init aefc17fbfdb2fe2eb4d84cd95a9e8f6fed9372c0c841327ab4cbf25a90474b0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gates, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:25:02 compute-0 podman[266826]: 2026-01-23 10:25:02.707242413 +0000 UTC m=+0.159675022 container start aefc17fbfdb2fe2eb4d84cd95a9e8f6fed9372c0c841327ab4cbf25a90474b0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gates, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:25:02 compute-0 podman[266826]: 2026-01-23 10:25:02.710833387 +0000 UTC m=+0.163266016 container attach aefc17fbfdb2fe2eb4d84cd95a9e8f6fed9372c0c841327ab4cbf25a90474b0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 10:25:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Jan 23 10:25:03 compute-0 priceless_gates[266843]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:25:03 compute-0 priceless_gates[266843]: --> All data devices are unavailable
Jan 23 10:25:03 compute-0 systemd[1]: libpod-aefc17fbfdb2fe2eb4d84cd95a9e8f6fed9372c0c841327ab4cbf25a90474b0a.scope: Deactivated successfully.
Jan 23 10:25:03 compute-0 podman[266826]: 2026-01-23 10:25:03.062038659 +0000 UTC m=+0.514471248 container died aefc17fbfdb2fe2eb4d84cd95a9e8f6fed9372c0c841327ab4cbf25a90474b0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gates, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:25:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:03.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:03 compute-0 nova_compute[249229]: 2026-01-23 10:25:03.140 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:25:03 compute-0 nova_compute[249229]: 2026-01-23 10:25:03.142 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:25:03 compute-0 nova_compute[249229]: 2026-01-23 10:25:03.143 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 23 10:25:03 compute-0 nova_compute[249229]: 2026-01-23 10:25:03.143 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:25:03 compute-0 nova_compute[249229]: 2026-01-23 10:25:03.157 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:03 compute-0 nova_compute[249229]: 2026-01-23 10:25:03.158 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-296b9e4e384c44a3c531850aad790ba0d7d7edd93490a8b028639e504c78e826-merged.mount: Deactivated successfully.
Jan 23 10:25:03 compute-0 podman[266826]: 2026-01-23 10:25:03.352568268 +0000 UTC m=+0.805000867 container remove aefc17fbfdb2fe2eb4d84cd95a9e8f6fed9372c0c841327ab4cbf25a90474b0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_gates, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 23 10:25:03 compute-0 systemd[1]: libpod-conmon-aefc17fbfdb2fe2eb4d84cd95a9e8f6fed9372c0c841327ab4cbf25a90474b0a.scope: Deactivated successfully.
Jan 23 10:25:03 compute-0 sudo[266696]: pam_unix(sudo:session): session closed for user root
Jan 23 10:25:03 compute-0 sudo[266870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:25:03 compute-0 sudo[266870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:25:03 compute-0 sudo[266870]: pam_unix(sudo:session): session closed for user root
Jan 23 10:25:03 compute-0 sudo[266895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:25:03 compute-0 sudo[266895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:25:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:03.687Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:03 compute-0 nova_compute[249229]: 2026-01-23 10:25:03.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:25:03 compute-0 ceph-mon[74335]: pgmap v954: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Jan 23 10:25:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:03 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:03 compute-0 podman[266962]: 2026-01-23 10:25:03.866115499 +0000 UTC m=+0.025773225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:25:04 compute-0 podman[266962]: 2026-01-23 10:25:04.003690701 +0000 UTC m=+0.163348447 container create ed82c06eef5ded792b0858c5f0422cbabfef3dfa265b1986d697c0104afbb43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:25:04 compute-0 systemd[1]: Started libpod-conmon-ed82c06eef5ded792b0858c5f0422cbabfef3dfa265b1986d697c0104afbb43e.scope.
Jan 23 10:25:04 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:25:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:04.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:04 compute-0 nova_compute[249229]: 2026-01-23 10:25:04.709 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:25:04 compute-0 podman[266962]: 2026-01-23 10:25:04.754145022 +0000 UTC m=+0.913802798 container init ed82c06eef5ded792b0858c5f0422cbabfef3dfa265b1986d697c0104afbb43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 10:25:04 compute-0 podman[266962]: 2026-01-23 10:25:04.762000739 +0000 UTC m=+0.921658445 container start ed82c06eef5ded792b0858c5f0422cbabfef3dfa265b1986d697c0104afbb43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:25:04 compute-0 nostalgic_lederberg[266978]: 167 167
Jan 23 10:25:04 compute-0 systemd[1]: libpod-ed82c06eef5ded792b0858c5f0422cbabfef3dfa265b1986d697c0104afbb43e.scope: Deactivated successfully.
Jan 23 10:25:04 compute-0 conmon[266978]: conmon ed82c06eef5ded792b08 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed82c06eef5ded792b0858c5f0422cbabfef3dfa265b1986d697c0104afbb43e.scope/container/memory.events
Jan 23 10:25:04 compute-0 podman[266962]: 2026-01-23 10:25:04.857781695 +0000 UTC m=+1.017439511 container attach ed82c06eef5ded792b0858c5f0422cbabfef3dfa265b1986d697c0104afbb43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:25:04 compute-0 podman[266962]: 2026-01-23 10:25:04.8583237 +0000 UTC m=+1.017981406 container died ed82c06eef5ded792b0858c5f0422cbabfef3dfa265b1986d697c0104afbb43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2c4e764a8ba7febd6787c1f2b3491d31cc45d087d8efec32551a1b3e637e709-merged.mount: Deactivated successfully.
Jan 23 10:25:04 compute-0 podman[266962]: 2026-01-23 10:25:04.904409991 +0000 UTC m=+1.064067697 container remove ed82c06eef5ded792b0858c5f0422cbabfef3dfa265b1986d697c0104afbb43e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 10:25:04 compute-0 systemd[1]: libpod-conmon-ed82c06eef5ded792b0858c5f0422cbabfef3dfa265b1986d697c0104afbb43e.scope: Deactivated successfully.
Jan 23 10:25:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Jan 23 10:25:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 23 10:25:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:05.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:05 compute-0 podman[267005]: 2026-01-23 10:25:05.050662265 +0000 UTC m=+0.024247582 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:25:05 compute-0 podman[267005]: 2026-01-23 10:25:05.425188881 +0000 UTC m=+0.398774178 container create a2674b4d34630232c8e6b3e054c1f8bb00eee3da4e72773814643377a6debff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:25:05 compute-0 sudo[267019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:25:05 compute-0 sudo[267019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:25:05 compute-0 sudo[267019]: pam_unix(sudo:session): session closed for user root
Jan 23 10:25:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:05 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:05 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:06.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:06 compute-0 systemd[1]: Started libpod-conmon-a2674b4d34630232c8e6b3e054c1f8bb00eee3da4e72773814643377a6debff8.scope.
Jan 23 10:25:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17db936edfdef0a9fbe523a3b2129f62e45b74910ba8918e9353a7932525433/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17db936edfdef0a9fbe523a3b2129f62e45b74910ba8918e9353a7932525433/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17db936edfdef0a9fbe523a3b2129f62e45b74910ba8918e9353a7932525433/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17db936edfdef0a9fbe523a3b2129f62e45b74910ba8918e9353a7932525433/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:06 compute-0 podman[267005]: 2026-01-23 10:25:06.573634125 +0000 UTC m=+1.547219422 container init a2674b4d34630232c8e6b3e054c1f8bb00eee3da4e72773814643377a6debff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 10:25:06 compute-0 podman[267005]: 2026-01-23 10:25:06.580898475 +0000 UTC m=+1.554483782 container start a2674b4d34630232c8e6b3e054c1f8bb00eee3da4e72773814643377a6debff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 23 10:25:06 compute-0 podman[267005]: 2026-01-23 10:25:06.584507489 +0000 UTC m=+1.558092866 container attach a2674b4d34630232c8e6b3e054c1f8bb00eee3da4e72773814643377a6debff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:25:06 compute-0 loving_hopper[267047]: {
Jan 23 10:25:06 compute-0 loving_hopper[267047]:     "1": [
Jan 23 10:25:06 compute-0 loving_hopper[267047]:         {
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             "devices": [
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "/dev/loop3"
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             ],
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             "lv_name": "ceph_lv0",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             "lv_size": "21470642176",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             "name": "ceph_lv0",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             "tags": {
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.cluster_name": "ceph",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.crush_device_class": "",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.encrypted": "0",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.osd_id": "1",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.type": "block",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.vdo": "0",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:                 "ceph.with_tpm": "0"
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             },
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             "type": "block",
Jan 23 10:25:06 compute-0 loving_hopper[267047]:             "vg_name": "ceph_vg0"
Jan 23 10:25:06 compute-0 loving_hopper[267047]:         }
Jan 23 10:25:06 compute-0 loving_hopper[267047]:     ]
Jan 23 10:25:06 compute-0 loving_hopper[267047]: }
Jan 23 10:25:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:06 compute-0 systemd[1]: libpod-a2674b4d34630232c8e6b3e054c1f8bb00eee3da4e72773814643377a6debff8.scope: Deactivated successfully.
Jan 23 10:25:06 compute-0 podman[267005]: 2026-01-23 10:25:06.90689279 +0000 UTC m=+1.880478087 container died a2674b4d34630232c8e6b3e054c1f8bb00eee3da4e72773814643377a6debff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:25:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 23 10:25:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:07.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a17db936edfdef0a9fbe523a3b2129f62e45b74910ba8918e9353a7932525433-merged.mount: Deactivated successfully.
Jan 23 10:25:07 compute-0 podman[267005]: 2026-01-23 10:25:07.57593961 +0000 UTC m=+2.549524927 container remove a2674b4d34630232c8e6b3e054c1f8bb00eee3da4e72773814643377a6debff8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hopper, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:25:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:07 compute-0 systemd[1]: libpod-conmon-a2674b4d34630232c8e6b3e054c1f8bb00eee3da4e72773814643377a6debff8.scope: Deactivated successfully.
Jan 23 10:25:07 compute-0 sudo[266895]: pam_unix(sudo:session): session closed for user root
Jan 23 10:25:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:07 compute-0 sudo[267068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:25:07 compute-0 sudo[267068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:25:07 compute-0 sudo[267068]: pam_unix(sudo:session): session closed for user root
Jan 23 10:25:07 compute-0 nova_compute[249229]: 2026-01-23 10:25:07.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:25:07 compute-0 nova_compute[249229]: 2026-01-23 10:25:07.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:25:07 compute-0 nova_compute[249229]: 2026-01-23 10:25:07.718 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:25:07 compute-0 sudo[267093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:25:07 compute-0 sudo[267093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:25:07 compute-0 nova_compute[249229]: 2026-01-23 10:25:07.782 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:25:07 compute-0 nova_compute[249229]: 2026-01-23 10:25:07.782 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:25:07 compute-0 nova_compute[249229]: 2026-01-23 10:25:07.783 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:25:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:07.793Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:25:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:07.793Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:07 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:07 compute-0 ceph-mon[74335]: pgmap v955: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Jan 23 10:25:07 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:25:07 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:25:08 compute-0 nova_compute[249229]: 2026-01-23 10:25:08.159 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:08 compute-0 podman[267159]: 2026-01-23 10:25:08.184977828 +0000 UTC m=+0.038456062 container create 1f19c63b155d9f4c44fd900a1b0c22b175a057544a00378905b54ed47ed03cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 10:25:08 compute-0 systemd[1]: Started libpod-conmon-1f19c63b155d9f4c44fd900a1b0c22b175a057544a00378905b54ed47ed03cf8.scope.
Jan 23 10:25:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:25:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:08.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:08 compute-0 podman[267159]: 2026-01-23 10:25:08.167830133 +0000 UTC m=+0.021308387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:25:08 compute-0 podman[267159]: 2026-01-23 10:25:08.267921583 +0000 UTC m=+0.121399827 container init 1f19c63b155d9f4c44fd900a1b0c22b175a057544a00378905b54ed47ed03cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:25:08 compute-0 podman[267159]: 2026-01-23 10:25:08.274627627 +0000 UTC m=+0.128105851 container start 1f19c63b155d9f4c44fd900a1b0c22b175a057544a00378905b54ed47ed03cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:25:08 compute-0 adoring_murdock[267176]: 167 167
Jan 23 10:25:08 compute-0 systemd[1]: libpod-1f19c63b155d9f4c44fd900a1b0c22b175a057544a00378905b54ed47ed03cf8.scope: Deactivated successfully.
Jan 23 10:25:08 compute-0 podman[267159]: 2026-01-23 10:25:08.28512898 +0000 UTC m=+0.138607294 container attach 1f19c63b155d9f4c44fd900a1b0c22b175a057544a00378905b54ed47ed03cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 23 10:25:08 compute-0 podman[267159]: 2026-01-23 10:25:08.286124609 +0000 UTC m=+0.139602893 container died 1f19c63b155d9f4c44fd900a1b0c22b175a057544a00378905b54ed47ed03cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 23 10:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c25179db574b6d10cfbfd20402684e46cd398837b2d9a7ac84c9d96161787fd5-merged.mount: Deactivated successfully.
Jan 23 10:25:08 compute-0 podman[267159]: 2026-01-23 10:25:08.325461425 +0000 UTC m=+0.178939659 container remove 1f19c63b155d9f4c44fd900a1b0c22b175a057544a00378905b54ed47ed03cf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 10:25:08 compute-0 systemd[1]: libpod-conmon-1f19c63b155d9f4c44fd900a1b0c22b175a057544a00378905b54ed47ed03cf8.scope: Deactivated successfully.
Jan 23 10:25:08 compute-0 podman[267202]: 2026-01-23 10:25:08.509775228 +0000 UTC m=+0.046500344 container create ec2be422bb1c1b0b799bfa0ddc208f76f6601f1f548415e8148c907e22022fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bardeen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 23 10:25:08 compute-0 systemd[1]: Started libpod-conmon-ec2be422bb1c1b0b799bfa0ddc208f76f6601f1f548415e8148c907e22022fc1.scope.
Jan 23 10:25:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fb772b7e37cb511d06d18a07eceda5e1073cc4d3442a964e74816df1aac886/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:08 compute-0 podman[267202]: 2026-01-23 10:25:08.486626339 +0000 UTC m=+0.023351475 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fb772b7e37cb511d06d18a07eceda5e1073cc4d3442a964e74816df1aac886/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fb772b7e37cb511d06d18a07eceda5e1073cc4d3442a964e74816df1aac886/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fb772b7e37cb511d06d18a07eceda5e1073cc4d3442a964e74816df1aac886/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:25:08 compute-0 podman[267202]: 2026-01-23 10:25:08.598294814 +0000 UTC m=+0.135019990 container init ec2be422bb1c1b0b799bfa0ddc208f76f6601f1f548415e8148c907e22022fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:25:08 compute-0 podman[267202]: 2026-01-23 10:25:08.606649245 +0000 UTC m=+0.143374361 container start ec2be422bb1c1b0b799bfa0ddc208f76f6601f1f548415e8148c907e22022fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 23 10:25:08 compute-0 podman[267202]: 2026-01-23 10:25:08.668172532 +0000 UTC m=+0.204897678 container attach ec2be422bb1c1b0b799bfa0ddc208f76f6601f1f548415e8148c907e22022fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bardeen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 23 10:25:08 compute-0 nova_compute[249229]: 2026-01-23 10:25:08.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:25:08 compute-0 nova_compute[249229]: 2026-01-23 10:25:08.790 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:25:08 compute-0 nova_compute[249229]: 2026-01-23 10:25:08.791 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:25:08 compute-0 nova_compute[249229]: 2026-01-23 10:25:08.791 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:25:08 compute-0 nova_compute[249229]: 2026-01-23 10:25:08.791 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:25:08 compute-0 nova_compute[249229]: 2026-01-23 10:25:08.792 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:25:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 23 10:25:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:09.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:25:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1377665100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:25:09 compute-0 nova_compute[249229]: 2026-01-23 10:25:09.299 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:25:09 compute-0 lvm[267316]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:25:09 compute-0 lvm[267316]: VG ceph_vg0 finished
Jan 23 10:25:09 compute-0 trusting_bardeen[267220]: {}
Jan 23 10:25:09 compute-0 systemd[1]: libpod-ec2be422bb1c1b0b799bfa0ddc208f76f6601f1f548415e8148c907e22022fc1.scope: Deactivated successfully.
Jan 23 10:25:09 compute-0 systemd[1]: libpod-ec2be422bb1c1b0b799bfa0ddc208f76f6601f1f548415e8148c907e22022fc1.scope: Consumed 1.203s CPU time.
Jan 23 10:25:09 compute-0 podman[267202]: 2026-01-23 10:25:09.387663219 +0000 UTC m=+0.924388335 container died ec2be422bb1c1b0b799bfa0ddc208f76f6601f1f548415e8148c907e22022fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:25:09 compute-0 nova_compute[249229]: 2026-01-23 10:25:09.457 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:25:09 compute-0 nova_compute[249229]: 2026-01-23 10:25:09.458 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4476MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:25:09 compute-0 nova_compute[249229]: 2026-01-23 10:25:09.458 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:25:09 compute-0 nova_compute[249229]: 2026-01-23 10:25:09.459 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:25:09 compute-0 ceph-mon[74335]: pgmap v956: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 23 10:25:09 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:09 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:25:09 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3677633277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:25:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-62fb772b7e37cb511d06d18a07eceda5e1073cc4d3442a964e74816df1aac886-merged.mount: Deactivated successfully.
Jan 23 10:25:09 compute-0 nova_compute[249229]: 2026-01-23 10:25:09.738 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:25:09 compute-0 nova_compute[249229]: 2026-01-23 10:25:09.738 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:25:09 compute-0 nova_compute[249229]: 2026-01-23 10:25:09.755 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:25:09 compute-0 podman[267202]: 2026-01-23 10:25:09.785094556 +0000 UTC m=+1.321819672 container remove ec2be422bb1c1b0b799bfa0ddc208f76f6601f1f548415e8148c907e22022fc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bardeen, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:25:09 compute-0 sudo[267093]: pam_unix(sudo:session): session closed for user root
Jan 23 10:25:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:09 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:25:09 compute-0 systemd[1]: libpod-conmon-ec2be422bb1c1b0b799bfa0ddc208f76f6601f1f548415e8148c907e22022fc1.scope: Deactivated successfully.
Jan 23 10:25:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:09] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 23 10:25:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:09] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 23 10:25:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:25:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3204813948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:25:10 compute-0 nova_compute[249229]: 2026-01-23 10:25:10.214 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:25:10 compute-0 nova_compute[249229]: 2026-01-23 10:25:10.223 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:25:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:10.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:25:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:10 compute-0 nova_compute[249229]: 2026-01-23 10:25:10.391 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:25:10 compute-0 nova_compute[249229]: 2026-01-23 10:25:10.394 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:25:10 compute-0 nova_compute[249229]: 2026-01-23 10:25:10.394 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:25:10 compute-0 sudo[267355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:25:10 compute-0 sudo[267355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:25:10 compute-0 sudo[267355]: pam_unix(sudo:session): session closed for user root
Jan 23 10:25:10 compute-0 ceph-mon[74335]: pgmap v957: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 23 10:25:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1377665100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:25:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3681678963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:25:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3204813948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:25:10 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:10 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:25:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1091720736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:25:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Jan 23 10:25:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:11.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:11 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:12 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:25:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:12.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:12 compute-0 nova_compute[249229]: 2026-01-23 10:25:12.395 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:25:12 compute-0 nova_compute[249229]: 2026-01-23 10:25:12.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:25:12 compute-0 nova_compute[249229]: 2026-01-23 10:25:12.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:25:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1971763186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:25:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.014978) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163913015174, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1150, "num_deletes": 251, "total_data_size": 2110898, "memory_usage": 2152576, "flush_reason": "Manual Compaction"}
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163913033631, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 2012832, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28568, "largest_seqno": 29717, "table_properties": {"data_size": 2007176, "index_size": 2987, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12560, "raw_average_key_size": 20, "raw_value_size": 1995757, "raw_average_value_size": 3234, "num_data_blocks": 128, "num_entries": 617, "num_filter_entries": 617, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163822, "oldest_key_time": 1769163822, "file_creation_time": 1769163913, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 18696 microseconds, and 8229 cpu microseconds.
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.033688) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 2012832 bytes OK
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.033717) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.036053) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.036088) EVENT_LOG_v1 {"time_micros": 1769163913036081, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.036112) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2105621, prev total WAL file size 2105621, number of live WAL files 2.
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.037143) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1965KB)], [62(12MB)]
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163913037276, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 14774036, "oldest_snapshot_seqno": -1}
Jan 23 10:25:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:13.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:13 compute-0 nova_compute[249229]: 2026-01-23 10:25:13.160 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:25:13 compute-0 nova_compute[249229]: 2026-01-23 10:25:13.161 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:13 compute-0 nova_compute[249229]: 2026-01-23 10:25:13.161 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 23 10:25:13 compute-0 nova_compute[249229]: 2026-01-23 10:25:13.162 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:25:13 compute-0 nova_compute[249229]: 2026-01-23 10:25:13.162 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:25:13 compute-0 nova_compute[249229]: 2026-01-23 10:25:13.163 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5877 keys, 12616449 bytes, temperature: kUnknown
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163913298567, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12616449, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12578605, "index_size": 22054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 151986, "raw_average_key_size": 25, "raw_value_size": 12473816, "raw_average_value_size": 2122, "num_data_blocks": 881, "num_entries": 5877, "num_filter_entries": 5877, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769163913, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.298817) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12616449 bytes
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.301411) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.5 rd, 48.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.2 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(13.6) write-amplify(6.3) OK, records in: 6398, records dropped: 521 output_compression: NoCompression
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.301432) EVENT_LOG_v1 {"time_micros": 1769163913301421, "job": 34, "event": "compaction_finished", "compaction_time_micros": 261368, "compaction_time_cpu_micros": 29401, "output_level": 6, "num_output_files": 1, "total_output_size": 12616449, "num_input_records": 6398, "num_output_records": 5877, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163913301903, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163913304561, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.036978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.304649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.304665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.304667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.304669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:25:13 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:25:13.304671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:25:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:13.688Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:25:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:13.688Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:13 compute-0 nova_compute[249229]: 2026-01-23 10:25:13.708 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:25:13 compute-0 nova_compute[249229]: 2026-01-23 10:25:13.715 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:25:13 compute-0 ceph-mon[74335]: pgmap v958: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Jan 23 10:25:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:13 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:13 compute-0 ovn_controller[151634]: 2026-01-23T10:25:13Z|00060|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 23 10:25:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:14.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:14 compute-0 ceph-mon[74335]: pgmap v959: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:25:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:25:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:15.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:15 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:16.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:16 compute-0 ceph-mon[74335]: pgmap v960: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:25:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:25:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:17.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe634004bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:17.794Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:17 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:18 compute-0 nova_compute[249229]: 2026-01-23 10:25:18.163 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:18.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:18 compute-0 ceph-mon[74335]: pgmap v961: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:25:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 23 10:25:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:19.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe63400c7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:19 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe63400c7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:19] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 23 10:25:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:19] "GET /metrics HTTP/1.1" 200 48487 "" "Prometheus/2.51.0"
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:25:20
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', '.mgr', '.nfs', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'images', '.rgw.root']
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:25:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:25:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:25:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:20.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:25:20 compute-0 ceph-mon[74335]: pgmap v962: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 23 10:25:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 23 10:25:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:21.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe63400c7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:21 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:21 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:22 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:22 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:25:22 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:25:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:22.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:22 compute-0 podman[267392]: 2026-01-23 10:25:22.57767655 +0000 UTC m=+0.100635877 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 10:25:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 23 10:25:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:23.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:23 compute-0 nova_compute[249229]: 2026-01-23 10:25:23.165 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe63400c7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:23.690Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:23 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:23 compute-0 ceph-mon[74335]: pgmap v963: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 23 10:25:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:24.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 23 10:25:25 compute-0 ceph-mon[74335]: pgmap v964: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 23 10:25:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:25 compute-0 sudo[267421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:25:25 compute-0 sudo[267421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:25:25 compute-0 sudo[267421]: pam_unix(sudo:session): session closed for user root
Jan 23 10:25:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:25 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:25 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe63400c7a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:26.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:26 compute-0 ceph-mon[74335]: pgmap v965: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 23 10:25:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:25:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:27.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:27.795Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:27 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6480048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:28 compute-0 nova_compute[249229]: 2026-01-23 10:25:28.167 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:28.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:28 compute-0 ceph-mon[74335]: pgmap v966: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:25:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 10:25:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe620001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:29 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:29] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Jan 23 10:25:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:29] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Jan 23 10:25:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:30.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:30 compute-0 ceph-mon[74335]: pgmap v967: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 10:25:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:25:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:31.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644002b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:31 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:31 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:25:32.040 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:25:32 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:25:32.042 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:25:32 compute-0 nova_compute[249229]: 2026-01-23 10:25:32.041 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:32 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:32 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 23 10:25:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:32.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:32 compute-0 ceph-mon[74335]: pgmap v968: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:25:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 14 KiB/s wr, 6 op/s
Jan 23 10:25:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:33.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:33 compute-0 nova_compute[249229]: 2026-01-23 10:25:33.168 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:33 compute-0 podman[267457]: 2026-01-23 10:25:33.526211182 +0000 UTC m=+0.051730645 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 10:25:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644002b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:33.691Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:25:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:33.691Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:25:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:33.691Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:33 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c0022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:34.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:34 compute-0 ceph-mon[74335]: pgmap v969: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 14 KiB/s wr, 6 op/s
Jan 23 10:25:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 14 KiB/s wr, 6 op/s
Jan 23 10:25:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:25:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:25:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:35 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:35 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe644002b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:25:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:36.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:36 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:25:36 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:25:36 compute-0 ceph-mon[74335]: pgmap v970: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 14 KiB/s wr, 6 op/s
Jan 23 10:25:36 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/976421879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:25:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 15 KiB/s wr, 34 op/s
Jan 23 10:25:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:37.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c0022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:37.795Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:37 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:38 compute-0 nova_compute[249229]: 2026-01-23 10:25:38.170 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:38.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:38 compute-0 ceph-mon[74335]: pgmap v971: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 15 KiB/s wr, 34 op/s
Jan 23 10:25:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Jan 23 10:25:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:39.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe62c0022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:39 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe61c0046d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 23 10:25:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:39] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Jan 23 10:25:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:39] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Jan 23 10:25:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:40.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:40 compute-0 ceph-mon[74335]: pgmap v972: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Jan 23 10:25:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 29 op/s
Jan 23 10:25:41 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:25:41.044 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:25:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:41.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:41 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu[262782]: 23/01/2026 10:25:41 : epoch 69734b92 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe638001090 fd 38 proxy ignored for local
Jan 23 10:25:41 compute-0 kernel: ganesha.nfsd[267451]: segfault at 50 ip 00007fe6d8f8832e sp 00007fe641ffa210 error 4 in libntirpc.so.5.8[7fe6d8f6d000+2c000] likely on CPU 2 (core 0, socket 2)
Jan 23 10:25:41 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 23 10:25:41 compute-0 systemd[1]: Started Process Core Dump (PID 267486/UID 0).
Jan 23 10:25:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:42.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:25:43 compute-0 ceph-mon[74335]: pgmap v973: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 29 op/s
Jan 23 10:25:43 compute-0 systemd-coredump[267487]: Process 262786 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 74:
                                                    #0  0x00007fe6d8f8832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 23 10:25:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:43.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:43 compute-0 nova_compute[249229]: 2026-01-23 10:25:43.171 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:25:43 compute-0 nova_compute[249229]: 2026-01-23 10:25:43.173 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:43 compute-0 systemd[1]: systemd-coredump@11-267486-0.service: Deactivated successfully.
Jan 23 10:25:43 compute-0 systemd[1]: systemd-coredump@11-267486-0.service: Consumed 1.364s CPU time.
Jan 23 10:25:43 compute-0 podman[267494]: 2026-01-23 10:25:43.269964314 +0000 UTC m=+0.022999755 container died 7431735b9f593f91a26e051f7d5d7ca98041b8e4ab84f7742a2cafcd1841742a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:25:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-58c13e6d917960ee83c3d51fccbaf7e19e7614ad0577b76ac381cc631930ca60-merged.mount: Deactivated successfully.
Jan 23 10:25:43 compute-0 podman[267494]: 2026-01-23 10:25:43.306115807 +0000 UTC m=+0.059151228 container remove 7431735b9f593f91a26e051f7d5d7ca98041b8e4ab84f7742a2cafcd1841742a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-nfs-cephfs-2-0-compute-0-fenqiu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 10:25:43 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Main process exited, code=exited, status=139/n/a
Jan 23 10:25:43 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 10:25:43 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.979s CPU time.
Jan 23 10:25:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:43.692Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:25:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:43.692Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:25:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:43.693Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:44 compute-0 ceph-mon[74335]: pgmap v974: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:25:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:44.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:25:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:45.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:45 compute-0 sudo[267539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:25:45 compute-0 sudo[267539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:25:45 compute-0 sudo[267539]: pam_unix(sudo:session): session closed for user root
Jan 23 10:25:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:46.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:46 compute-0 ceph-mon[74335]: pgmap v975: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:25:46 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 23 10:25:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:25:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:47.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [WARNING] 022/102547 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 23 10:25:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal[95465]: [ALERT] 022/102547 (4) : backend 'backend' has no server available!
Jan 23 10:25:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:47.797Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:48 compute-0 nova_compute[249229]: 2026-01-23 10:25:48.173 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:25:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:48.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:48 compute-0 ceph-mon[74335]: pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:25:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:25:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/384635064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:25:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:25:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/384635064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:25:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:25:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:49.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/384635064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:25:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/384635064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:25:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:49] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Jan 23 10:25:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:49] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Jan 23 10:25:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:25:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:25:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:25:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:25:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:25:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:25:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:25:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:25:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:25:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:50.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:25:50 compute-0 ceph-mon[74335]: pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:25:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:25:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:25:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:51.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:52.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:52 compute-0 ceph-mon[74335]: pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:25:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:25:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:53.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:53 compute-0 nova_compute[249229]: 2026-01-23 10:25:53.175 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:25:53 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Scheduled restart job, restart counter is at 12.
Jan 23 10:25:53 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:25:53 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Consumed 1.979s CPU time.
Jan 23 10:25:53 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Start request repeated too quickly.
Jan 23 10:25:53 compute-0 systemd[1]: ceph-f3005f84-239a-55b6-a948-8f1fb592b920@nfs.cephfs.2.0.compute-0.fenqiu.service: Failed with result 'exit-code'.
Jan 23 10:25:53 compute-0 systemd[1]: Failed to start Ceph nfs.cephfs.2.0.compute-0.fenqiu for f3005f84-239a-55b6-a948-8f1fb592b920.
Jan 23 10:25:53 compute-0 podman[267573]: 2026-01-23 10:25:53.565651965 +0000 UTC m=+0.088371703 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 10:25:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:53.694Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:54.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:54 compute-0 ceph-mon[74335]: pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:25:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4284066152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:25:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:25:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:25:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:55.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:25:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:56.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:56 compute-0 ceph-mon[74335]: pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:25:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:25:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:25:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:57.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:57.798Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:25:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:25:57.799Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:25:58 compute-0 nova_compute[249229]: 2026-01-23 10:25:58.178 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:25:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:58.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:58 compute-0 ceph-mgr[74633]: [dashboard INFO request] [192.168.122.100:55076] [POST] [200] [0.005s] [4.0B] [3e6b565f-8072-44e9-9ffb-8180884ea386] /api/prometheus_receiver
Jan 23 10:25:58 compute-0 ceph-mon[74335]: pgmap v981: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:25:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1766427025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:25:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:25:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:25:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:25:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:59.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:25:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:25:59.778 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:25:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:25:59.778 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:25:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:25:59.778 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:25:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2805498511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:25:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:59] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:25:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:25:59] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:26:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:00.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:00 compute-0 ceph-mon[74335]: pgmap v982: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:26:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 23 10:26:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:01.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:02 compute-0 ceph-mon[74335]: pgmap v983: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 23 10:26:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:02.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 23 10:26:03 compute-0 nova_compute[249229]: 2026-01-23 10:26:03.179 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:26:03 compute-0 nova_compute[249229]: 2026-01-23 10:26:03.181 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:03 compute-0 nova_compute[249229]: 2026-01-23 10:26:03.181 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 23 10:26:03 compute-0 nova_compute[249229]: 2026-01-23 10:26:03.182 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:26:03 compute-0 nova_compute[249229]: 2026-01-23 10:26:03.182 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:26:03 compute-0 nova_compute[249229]: 2026-01-23 10:26:03.183 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:03.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:03.695Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:26:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:03.695Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:26:04 compute-0 ceph-mon[74335]: pgmap v984: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 23 10:26:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:04.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:04 compute-0 podman[267610]: 2026-01-23 10:26:04.522766848 +0000 UTC m=+0.050476439 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:26:04 compute-0 nova_compute[249229]: 2026-01-23 10:26:04.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 23 10:26:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:26:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:26:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:05.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:26:05 compute-0 sudo[267628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:26:05 compute-0 sudo[267628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:05 compute-0 sudo[267628]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:06.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:26:07 compute-0 ceph-mon[74335]: pgmap v985: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 23 10:26:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:07.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:07 compute-0 nova_compute[249229]: 2026-01-23 10:26:07.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:07 compute-0 nova_compute[249229]: 2026-01-23 10:26:07.718 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 10:26:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:07.800Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:08 compute-0 nova_compute[249229]: 2026-01-23 10:26:08.184 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:08.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:08 compute-0 ceph-mon[74335]: pgmap v986: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 10:26:08 compute-0 nova_compute[249229]: 2026-01-23 10:26:08.735 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:08 compute-0 nova_compute[249229]: 2026-01-23 10:26:08.773 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:26:08 compute-0 nova_compute[249229]: 2026-01-23 10:26:08.773 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:26:08 compute-0 nova_compute[249229]: 2026-01-23 10:26:08.773 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:26:08 compute-0 nova_compute[249229]: 2026-01-23 10:26:08.773 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:26:08 compute-0 nova_compute[249229]: 2026-01-23 10:26:08.774 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:26:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:08.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:26:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:09.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:26:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720359449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:26:09 compute-0 nova_compute[249229]: 2026-01-23 10:26:09.258 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:26:09 compute-0 nova_compute[249229]: 2026-01-23 10:26:09.424 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:26:09 compute-0 nova_compute[249229]: 2026-01-23 10:26:09.425 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4629MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:26:09 compute-0 nova_compute[249229]: 2026-01-23 10:26:09.426 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:26:09 compute-0 nova_compute[249229]: 2026-01-23 10:26:09.426 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:26:09 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1720359449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:26:09 compute-0 nova_compute[249229]: 2026-01-23 10:26:09.492 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:26:09 compute-0 nova_compute[249229]: 2026-01-23 10:26:09.492 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:26:09 compute-0 nova_compute[249229]: 2026-01-23 10:26:09.593 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:26:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:09] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Jan 23 10:26:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:09] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Jan 23 10:26:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:26:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80523321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:26:10 compute-0 nova_compute[249229]: 2026-01-23 10:26:10.077 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:26:10 compute-0 nova_compute[249229]: 2026-01-23 10:26:10.083 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:26:10 compute-0 nova_compute[249229]: 2026-01-23 10:26:10.102 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:26:10 compute-0 nova_compute[249229]: 2026-01-23 10:26:10.104 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:26:10 compute-0 nova_compute[249229]: 2026-01-23 10:26:10.104 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:26:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:10.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:10 compute-0 ceph-mon[74335]: pgmap v987: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:26:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/80523321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:26:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1268700137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:26:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2269378824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:26:10 compute-0 sudo[267703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:26:10 compute-0 sudo[267703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:10 compute-0 sudo[267703]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:10 compute-0 sudo[267728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:26:10 compute-0 sudo[267728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:26:11 compute-0 nova_compute[249229]: 2026-01-23 10:26:11.085 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:11 compute-0 nova_compute[249229]: 2026-01-23 10:26:11.085 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:26:11 compute-0 nova_compute[249229]: 2026-01-23 10:26:11.085 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:26:11 compute-0 nova_compute[249229]: 2026-01-23 10:26:11.101 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:26:11 compute-0 nova_compute[249229]: 2026-01-23 10:26:11.101 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:11 compute-0 nova_compute[249229]: 2026-01-23 10:26:11.101 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:11 compute-0 nova_compute[249229]: 2026-01-23 10:26:11.101 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:26:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:11.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:11 compute-0 sudo[267728]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:11 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=infra.usagestats t=2026-01-23T10:26:11.673829496Z level=info msg="Usage stats are ready to report"
Jan 23 10:26:11 compute-0 nova_compute[249229]: 2026-01-23 10:26:11.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:11 compute-0 nova_compute[249229]: 2026-01-23 10:26:11.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 10:26:11 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/311252267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:26:11 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/304457320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:26:11 compute-0 nova_compute[249229]: 2026-01-23 10:26:11.755 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 10:26:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:12.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:12 compute-0 nova_compute[249229]: 2026-01-23 10:26:12.756 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:12 compute-0 ceph-mon[74335]: pgmap v988: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:26:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 23 10:26:13 compute-0 nova_compute[249229]: 2026-01-23 10:26:13.185 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:13.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 10:26:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:13.696Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:26:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:13.696Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:26:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:13.697Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:26:13 compute-0 nova_compute[249229]: 2026-01-23 10:26:13.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:13 compute-0 nova_compute[249229]: 2026-01-23 10:26:13.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:14.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 10:26:14 compute-0 ceph-mon[74335]: pgmap v989: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 23 10:26:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 23 10:26:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:15.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:26:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:26:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:26:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:26:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 75 op/s
Jan 23 10:26:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 372 B/s rd, 0 op/s
Jan 23 10:26:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:26:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:26:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:26:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:26:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:26:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:26:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:26:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:26:15 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:15 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1174453001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:26:15 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:26:15 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:26:15 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:15 compute-0 sudo[267788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:26:15 compute-0 sudo[267788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:15 compute-0 sudo[267788]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:15 compute-0 sudo[267813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:26:15 compute-0 sudo[267813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:15 compute-0 nova_compute[249229]: 2026-01-23 10:26:15.708 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:16 compute-0 podman[267882]: 2026-01-23 10:26:16.108573641 +0000 UTC m=+0.042056045 container create 371bda5da7a7b48cc90080ca81533e7290d7973e54d37fa746a9bf15885b8f93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 10:26:16 compute-0 systemd[1]: Started libpod-conmon-371bda5da7a7b48cc90080ca81533e7290d7973e54d37fa746a9bf15885b8f93.scope.
Jan 23 10:26:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:26:16 compute-0 podman[267882]: 2026-01-23 10:26:16.090974723 +0000 UTC m=+0.024457157 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:26:16 compute-0 podman[267882]: 2026-01-23 10:26:16.201754282 +0000 UTC m=+0.135236716 container init 371bda5da7a7b48cc90080ca81533e7290d7973e54d37fa746a9bf15885b8f93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_pare, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:26:16 compute-0 podman[267882]: 2026-01-23 10:26:16.207945731 +0000 UTC m=+0.141428135 container start 371bda5da7a7b48cc90080ca81533e7290d7973e54d37fa746a9bf15885b8f93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_pare, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:26:16 compute-0 podman[267882]: 2026-01-23 10:26:16.210908637 +0000 UTC m=+0.144391061 container attach 371bda5da7a7b48cc90080ca81533e7290d7973e54d37fa746a9bf15885b8f93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_pare, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 23 10:26:16 compute-0 distracted_pare[267898]: 167 167
Jan 23 10:26:16 compute-0 systemd[1]: libpod-371bda5da7a7b48cc90080ca81533e7290d7973e54d37fa746a9bf15885b8f93.scope: Deactivated successfully.
Jan 23 10:26:16 compute-0 podman[267882]: 2026-01-23 10:26:16.215479079 +0000 UTC m=+0.148961473 container died 371bda5da7a7b48cc90080ca81533e7290d7973e54d37fa746a9bf15885b8f93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_pare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 10:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f2eab2bbc42dc50dfea447c348d861d78e96fb20a260a5472973b0877fcfecc-merged.mount: Deactivated successfully.
Jan 23 10:26:16 compute-0 podman[267882]: 2026-01-23 10:26:16.255574097 +0000 UTC m=+0.189056511 container remove 371bda5da7a7b48cc90080ca81533e7290d7973e54d37fa746a9bf15885b8f93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_pare, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:26:16 compute-0 systemd[1]: libpod-conmon-371bda5da7a7b48cc90080ca81533e7290d7973e54d37fa746a9bf15885b8f93.scope: Deactivated successfully.
Jan 23 10:26:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:16.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:16 compute-0 podman[267921]: 2026-01-23 10:26:16.423249539 +0000 UTC m=+0.047866543 container create d6ef40ee1f51bc52fa95aaddabd8b7e35afef5192808e6aefca38e1dc92f342c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_robinson, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 10:26:16 compute-0 systemd[1]: Started libpod-conmon-d6ef40ee1f51bc52fa95aaddabd8b7e35afef5192808e6aefca38e1dc92f342c.scope.
Jan 23 10:26:16 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:26:16 compute-0 podman[267921]: 2026-01-23 10:26:16.401070948 +0000 UTC m=+0.025688002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf03f4bb5157bc9b6708298fac6d6d6de0bd274eb64d300e02fca09cae25970/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf03f4bb5157bc9b6708298fac6d6d6de0bd274eb64d300e02fca09cae25970/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf03f4bb5157bc9b6708298fac6d6d6de0bd274eb64d300e02fca09cae25970/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf03f4bb5157bc9b6708298fac6d6d6de0bd274eb64d300e02fca09cae25970/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf03f4bb5157bc9b6708298fac6d6d6de0bd274eb64d300e02fca09cae25970/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:16 compute-0 podman[267921]: 2026-01-23 10:26:16.512666642 +0000 UTC m=+0.137283676 container init d6ef40ee1f51bc52fa95aaddabd8b7e35afef5192808e6aefca38e1dc92f342c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_robinson, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:26:16 compute-0 podman[267921]: 2026-01-23 10:26:16.522187196 +0000 UTC m=+0.146804200 container start d6ef40ee1f51bc52fa95aaddabd8b7e35afef5192808e6aefca38e1dc92f342c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_robinson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:26:16 compute-0 podman[267921]: 2026-01-23 10:26:16.525450511 +0000 UTC m=+0.150067545 container attach d6ef40ee1f51bc52fa95aaddabd8b7e35afef5192808e6aefca38e1dc92f342c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 10:26:16 compute-0 ceph-mon[74335]: pgmap v990: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 23 10:26:16 compute-0 ceph-mon[74335]: pgmap v991: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 75 op/s
Jan 23 10:26:16 compute-0 ceph-mon[74335]: pgmap v992: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 372 B/s rd, 0 op/s
Jan 23 10:26:16 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:16 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:26:16 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:26:16 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:26:16 compute-0 heuristic_robinson[267937]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:26:16 compute-0 heuristic_robinson[267937]: --> All data devices are unavailable
Jan 23 10:26:16 compute-0 systemd[1]: libpod-d6ef40ee1f51bc52fa95aaddabd8b7e35afef5192808e6aefca38e1dc92f342c.scope: Deactivated successfully.
Jan 23 10:26:16 compute-0 podman[267921]: 2026-01-23 10:26:16.856715127 +0000 UTC m=+0.481332131 container died d6ef40ee1f51bc52fa95aaddabd8b7e35afef5192808e6aefca38e1dc92f342c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_robinson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 10:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cf03f4bb5157bc9b6708298fac6d6d6de0bd274eb64d300e02fca09cae25970-merged.mount: Deactivated successfully.
Jan 23 10:26:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:16 compute-0 podman[267921]: 2026-01-23 10:26:16.906013321 +0000 UTC m=+0.530630325 container remove d6ef40ee1f51bc52fa95aaddabd8b7e35afef5192808e6aefca38e1dc92f342c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_robinson, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 10:26:16 compute-0 systemd[1]: libpod-conmon-d6ef40ee1f51bc52fa95aaddabd8b7e35afef5192808e6aefca38e1dc92f342c.scope: Deactivated successfully.
Jan 23 10:26:16 compute-0 sudo[267813]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:17 compute-0 sudo[267967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:26:17 compute-0 sudo[267967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:17 compute-0 sudo[267967]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:17 compute-0 sudo[267992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:26:17 compute-0 sudo[267992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:17.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 167 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 562 KiB/s rd, 5.7 MiB/s wr, 131 op/s
Jan 23 10:26:17 compute-0 podman[268059]: 2026-01-23 10:26:17.455285173 +0000 UTC m=+0.040503421 container create 5d4650fe73fedb431e0e3c62a7f0f6a39b0b02f32de389a4480828c6f8fd069f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lichterman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 23 10:26:17 compute-0 systemd[1]: Started libpod-conmon-5d4650fe73fedb431e0e3c62a7f0f6a39b0b02f32de389a4480828c6f8fd069f.scope.
Jan 23 10:26:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:26:17 compute-0 podman[268059]: 2026-01-23 10:26:17.52892885 +0000 UTC m=+0.114147128 container init 5d4650fe73fedb431e0e3c62a7f0f6a39b0b02f32de389a4480828c6f8fd069f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 23 10:26:17 compute-0 podman[268059]: 2026-01-23 10:26:17.436222503 +0000 UTC m=+0.021440781 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:26:17 compute-0 podman[268059]: 2026-01-23 10:26:17.53518261 +0000 UTC m=+0.120400858 container start 5d4650fe73fedb431e0e3c62a7f0f6a39b0b02f32de389a4480828c6f8fd069f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lichterman, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 23 10:26:17 compute-0 podman[268059]: 2026-01-23 10:26:17.538136386 +0000 UTC m=+0.123354634 container attach 5d4650fe73fedb431e0e3c62a7f0f6a39b0b02f32de389a4480828c6f8fd069f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 10:26:17 compute-0 bold_lichterman[268075]: 167 167
Jan 23 10:26:17 compute-0 systemd[1]: libpod-5d4650fe73fedb431e0e3c62a7f0f6a39b0b02f32de389a4480828c6f8fd069f.scope: Deactivated successfully.
Jan 23 10:26:17 compute-0 podman[268059]: 2026-01-23 10:26:17.541712359 +0000 UTC m=+0.126930607 container died 5d4650fe73fedb431e0e3c62a7f0f6a39b0b02f32de389a4480828c6f8fd069f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lichterman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a25fb613fbf814f1eec5fee9b11ce8041256d1965e97a5d2088bde39e35216b-merged.mount: Deactivated successfully.
Jan 23 10:26:17 compute-0 podman[268059]: 2026-01-23 10:26:17.576073361 +0000 UTC m=+0.161291609 container remove 5d4650fe73fedb431e0e3c62a7f0f6a39b0b02f32de389a4480828c6f8fd069f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lichterman, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 23 10:26:17 compute-0 systemd[1]: libpod-conmon-5d4650fe73fedb431e0e3c62a7f0f6a39b0b02f32de389a4480828c6f8fd069f.scope: Deactivated successfully.
Jan 23 10:26:17 compute-0 podman[268096]: 2026-01-23 10:26:17.76926797 +0000 UTC m=+0.051521008 container create 61aaa6121bf52cbef34ef0a0467fdd6744beced5347fefd25f53a422d9e7a617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:26:17 compute-0 systemd[1]: Started libpod-conmon-61aaa6121bf52cbef34ef0a0467fdd6744beced5347fefd25f53a422d9e7a617.scope.
Jan 23 10:26:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:17.801Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e7f5fddddcb049460e43f5b6cad8fd735a72745b2f2dfb7dba91e80b627421/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e7f5fddddcb049460e43f5b6cad8fd735a72745b2f2dfb7dba91e80b627421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e7f5fddddcb049460e43f5b6cad8fd735a72745b2f2dfb7dba91e80b627421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e7f5fddddcb049460e43f5b6cad8fd735a72745b2f2dfb7dba91e80b627421/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:17 compute-0 podman[268096]: 2026-01-23 10:26:17.749838649 +0000 UTC m=+0.032091717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:26:17 compute-0 podman[268096]: 2026-01-23 10:26:17.847635764 +0000 UTC m=+0.129888822 container init 61aaa6121bf52cbef34ef0a0467fdd6744beced5347fefd25f53a422d9e7a617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatelet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 10:26:17 compute-0 podman[268096]: 2026-01-23 10:26:17.854677757 +0000 UTC m=+0.136930785 container start 61aaa6121bf52cbef34ef0a0467fdd6744beced5347fefd25f53a422d9e7a617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatelet, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:26:17 compute-0 podman[268096]: 2026-01-23 10:26:17.857778407 +0000 UTC m=+0.140031455 container attach 61aaa6121bf52cbef34ef0a0467fdd6744beced5347fefd25f53a422d9e7a617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatelet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]: {
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:     "1": [
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:         {
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             "devices": [
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "/dev/loop3"
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             ],
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             "lv_name": "ceph_lv0",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             "lv_size": "21470642176",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             "name": "ceph_lv0",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             "tags": {
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.cluster_name": "ceph",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.crush_device_class": "",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.encrypted": "0",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.osd_id": "1",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.type": "block",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.vdo": "0",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:                 "ceph.with_tpm": "0"
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             },
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             "type": "block",
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:             "vg_name": "ceph_vg0"
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:         }
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]:     ]
Jan 23 10:26:18 compute-0 hardcore_chatelet[268112]: }
Jan 23 10:26:18 compute-0 systemd[1]: libpod-61aaa6121bf52cbef34ef0a0467fdd6744beced5347fefd25f53a422d9e7a617.scope: Deactivated successfully.
Jan 23 10:26:18 compute-0 podman[268096]: 2026-01-23 10:26:18.158794169 +0000 UTC m=+0.441047207 container died 61aaa6121bf52cbef34ef0a0467fdd6744beced5347fefd25f53a422d9e7a617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9e7f5fddddcb049460e43f5b6cad8fd735a72745b2f2dfb7dba91e80b627421-merged.mount: Deactivated successfully.
Jan 23 10:26:18 compute-0 nova_compute[249229]: 2026-01-23 10:26:18.186 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:18 compute-0 podman[268096]: 2026-01-23 10:26:18.199038982 +0000 UTC m=+0.481292020 container remove 61aaa6121bf52cbef34ef0a0467fdd6744beced5347fefd25f53a422d9e7a617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Jan 23 10:26:18 compute-0 systemd[1]: libpod-conmon-61aaa6121bf52cbef34ef0a0467fdd6744beced5347fefd25f53a422d9e7a617.scope: Deactivated successfully.
Jan 23 10:26:18 compute-0 sudo[267992]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:18 compute-0 sudo[268134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:26:18 compute-0 sudo[268134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:18 compute-0 sudo[268134]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:18.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:18 compute-0 sudo[268159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:26:18 compute-0 sudo[268159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:18 compute-0 ceph-mon[74335]: pgmap v993: 353 pgs: 353 active+clean; 167 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 562 KiB/s rd, 5.7 MiB/s wr, 131 op/s
Jan 23 10:26:18 compute-0 podman[268227]: 2026-01-23 10:26:18.777550918 +0000 UTC m=+0.042939901 container create 55ed2bf71cd142a8d9a0bed8cf02d92e3ccddeae1a31f2b56d635606518826f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 10:26:18 compute-0 systemd[1]: Started libpod-conmon-55ed2bf71cd142a8d9a0bed8cf02d92e3ccddeae1a31f2b56d635606518826f2.scope.
Jan 23 10:26:18 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:26:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:18.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:18 compute-0 podman[268227]: 2026-01-23 10:26:18.850629679 +0000 UTC m=+0.116018692 container init 55ed2bf71cd142a8d9a0bed8cf02d92e3ccddeae1a31f2b56d635606518826f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 10:26:18 compute-0 podman[268227]: 2026-01-23 10:26:18.757266762 +0000 UTC m=+0.022655775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:26:18 compute-0 podman[268227]: 2026-01-23 10:26:18.858119975 +0000 UTC m=+0.123508958 container start 55ed2bf71cd142a8d9a0bed8cf02d92e3ccddeae1a31f2b56d635606518826f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mendeleev, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:26:18 compute-0 podman[268227]: 2026-01-23 10:26:18.861325397 +0000 UTC m=+0.126714410 container attach 55ed2bf71cd142a8d9a0bed8cf02d92e3ccddeae1a31f2b56d635606518826f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:26:18 compute-0 silly_mendeleev[268243]: 167 167
Jan 23 10:26:18 compute-0 systemd[1]: libpod-55ed2bf71cd142a8d9a0bed8cf02d92e3ccddeae1a31f2b56d635606518826f2.scope: Deactivated successfully.
Jan 23 10:26:18 compute-0 podman[268227]: 2026-01-23 10:26:18.864590612 +0000 UTC m=+0.129979605 container died 55ed2bf71cd142a8d9a0bed8cf02d92e3ccddeae1a31f2b56d635606518826f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 10:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb52d6f8f7ddb582805605f319eb2f6c354912e69828817eabc2266bc841834f-merged.mount: Deactivated successfully.
Jan 23 10:26:18 compute-0 podman[268227]: 2026-01-23 10:26:18.898323156 +0000 UTC m=+0.163712139 container remove 55ed2bf71cd142a8d9a0bed8cf02d92e3ccddeae1a31f2b56d635606518826f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mendeleev, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 23 10:26:18 compute-0 systemd[1]: libpod-conmon-55ed2bf71cd142a8d9a0bed8cf02d92e3ccddeae1a31f2b56d635606518826f2.scope: Deactivated successfully.
Jan 23 10:26:19 compute-0 podman[268270]: 2026-01-23 10:26:19.052722795 +0000 UTC m=+0.038985887 container create d1bbdb0f202f939e81a7134b4f7c5fdf832a45da88ea098f6c958f798721dada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 10:26:19 compute-0 systemd[1]: Started libpod-conmon-d1bbdb0f202f939e81a7134b4f7c5fdf832a45da88ea098f6c958f798721dada.scope.
Jan 23 10:26:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a931aab0eaec82d380a2a6f583d911dd53f7f3401ca98b7e9ce78bd29cb3decd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a931aab0eaec82d380a2a6f583d911dd53f7f3401ca98b7e9ce78bd29cb3decd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a931aab0eaec82d380a2a6f583d911dd53f7f3401ca98b7e9ce78bd29cb3decd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a931aab0eaec82d380a2a6f583d911dd53f7f3401ca98b7e9ce78bd29cb3decd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:26:19 compute-0 podman[268270]: 2026-01-23 10:26:19.03733649 +0000 UTC m=+0.023599602 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:26:19 compute-0 podman[268270]: 2026-01-23 10:26:19.138627944 +0000 UTC m=+0.124891056 container init d1bbdb0f202f939e81a7134b4f7c5fdf832a45da88ea098f6c958f798721dada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mclean, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:26:19 compute-0 podman[268270]: 2026-01-23 10:26:19.145321618 +0000 UTC m=+0.131584710 container start d1bbdb0f202f939e81a7134b4f7c5fdf832a45da88ea098f6c958f798721dada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:26:19 compute-0 podman[268270]: 2026-01-23 10:26:19.149103607 +0000 UTC m=+0.135366769 container attach d1bbdb0f202f939e81a7134b4f7c5fdf832a45da88ea098f6c958f798721dada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Jan 23 10:26:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:19.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 167 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 562 KiB/s rd, 5.7 MiB/s wr, 131 op/s
Jan 23 10:26:19 compute-0 lvm[268360]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:26:19 compute-0 lvm[268360]: VG ceph_vg0 finished
Jan 23 10:26:19 compute-0 jovial_mclean[268286]: {}
Jan 23 10:26:19 compute-0 systemd[1]: libpod-d1bbdb0f202f939e81a7134b4f7c5fdf832a45da88ea098f6c958f798721dada.scope: Deactivated successfully.
Jan 23 10:26:19 compute-0 podman[268270]: 2026-01-23 10:26:19.850105971 +0000 UTC m=+0.836369073 container died d1bbdb0f202f939e81a7134b4f7c5fdf832a45da88ea098f6c958f798721dada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 10:26:19 compute-0 systemd[1]: libpod-d1bbdb0f202f939e81a7134b4f7c5fdf832a45da88ea098f6c958f798721dada.scope: Consumed 1.117s CPU time.
Jan 23 10:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a931aab0eaec82d380a2a6f583d911dd53f7f3401ca98b7e9ce78bd29cb3decd-merged.mount: Deactivated successfully.
Jan 23 10:26:19 compute-0 podman[268270]: 2026-01-23 10:26:19.892485975 +0000 UTC m=+0.878749067 container remove d1bbdb0f202f939e81a7134b4f7c5fdf832a45da88ea098f6c958f798721dada (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 10:26:19 compute-0 systemd[1]: libpod-conmon-d1bbdb0f202f939e81a7134b4f7c5fdf832a45da88ea098f6c958f798721dada.scope: Deactivated successfully.
Jan 23 10:26:19 compute-0 sudo[268159]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:26:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:26:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:19] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Jan 23 10:26:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:19] "GET /metrics HTTP/1.1" 200 48558 "" "Prometheus/2.51.0"
Jan 23 10:26:19 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:26:20
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.nfs', 'backups', '.rgw.root', 'volumes', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms']
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:26:20 compute-0 sudo[268374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:26:20 compute-0 sudo[268374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:20 compute-0 sudo[268374]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:26:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:26:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:26:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:20.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001102599282900306 of space, bias 1.0, pg target 0.3307797848700918 quantized to 32 (current 32)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:26:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:26:20 compute-0 nova_compute[249229]: 2026-01-23 10:26:20.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:26:20 compute-0 ceph-mon[74335]: pgmap v994: 353 pgs: 353 active+clean; 167 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 562 KiB/s rd, 5.7 MiB/s wr, 131 op/s
Jan 23 10:26:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:26:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:26:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1766751452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:26:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:21.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 567 KiB/s rd, 5.7 MiB/s wr, 135 op/s
Jan 23 10:26:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:21 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1512136197' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:26:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:22.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:22 compute-0 ceph-mon[74335]: pgmap v995: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 567 KiB/s rd, 5.7 MiB/s wr, 135 op/s
Jan 23 10:26:23 compute-0 nova_compute[249229]: 2026-01-23 10:26:23.188 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:23.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 567 KiB/s rd, 5.7 MiB/s wr, 135 op/s
Jan 23 10:26:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:23.698Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:26:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:23.698Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:26:24 compute-0 ceph-mon[74335]: pgmap v996: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 567 KiB/s rd, 5.7 MiB/s wr, 135 op/s
Jan 23 10:26:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:24.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:24 compute-0 podman[268403]: 2026-01-23 10:26:24.559929452 +0000 UTC m=+0.081863275 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 10:26:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:25.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 467 KiB/s rd, 4.7 MiB/s wr, 112 op/s
Jan 23 10:26:25 compute-0 sudo[268431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:26:25 compute-0 sudo[268431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:25 compute-0 sudo[268431]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:26 compute-0 ceph-mon[74335]: pgmap v997: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 467 KiB/s rd, 4.7 MiB/s wr, 112 op/s
Jan 23 10:26:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:26.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:27.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Jan 23 10:26:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:27.804Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:28 compute-0 nova_compute[249229]: 2026-01-23 10:26:28.191 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:28.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:28 compute-0 ceph-mon[74335]: pgmap v998: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Jan 23 10:26:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:28.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:29.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 77 op/s
Jan 23 10:26:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:29] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Jan 23 10:26:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:29] "GET /metrics HTTP/1.1" 200 48564 "" "Prometheus/2.51.0"
Jan 23 10:26:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:30.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:30 compute-0 ceph-mon[74335]: pgmap v999: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 77 op/s
Jan 23 10:26:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 77 op/s
Jan 23 10:26:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:32.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:32 compute-0 ceph-mon[74335]: pgmap v1000: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 77 op/s
Jan 23 10:26:33 compute-0 nova_compute[249229]: 2026-01-23 10:26:33.192 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:26:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:33.699Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:34.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:34 compute-0 ceph-mon[74335]: pgmap v1001: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:26:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:26:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:26:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:35.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:26:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:26:35 compute-0 podman[268466]: 2026-01-23 10:26:35.547383058 +0000 UTC m=+0.073706069 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 10:26:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:36.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:36 compute-0 ceph-mon[74335]: pgmap v1002: 353 pgs: 353 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:26:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:37.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 188 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Jan 23 10:26:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:37.805Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:38 compute-0 nova_compute[249229]: 2026-01-23 10:26:38.193 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:38.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:38.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:38 compute-0 ceph-mon[74335]: pgmap v1003: 353 pgs: 353 active+clean; 188 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Jan 23 10:26:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:39.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 188 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Jan 23 10:26:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:39] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Jan 23 10:26:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:39] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Jan 23 10:26:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:40.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:40 compute-0 ceph-mon[74335]: pgmap v1004: 353 pgs: 353 active+clean; 188 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Jan 23 10:26:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:41.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 10:26:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:42.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:42 compute-0 ceph-mon[74335]: pgmap v1005: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 10:26:43 compute-0 nova_compute[249229]: 2026-01-23 10:26:43.195 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:26:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:43.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:26:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:43.700Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:44.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:44 compute-0 ceph-mon[74335]: pgmap v1006: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:26:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:45.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:26:45 compute-0 sudo[268497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:26:45 compute-0 sudo[268497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:26:45 compute-0 sudo[268497]: pam_unix(sudo:session): session closed for user root
Jan 23 10:26:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:46.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:46 compute-0 ceph-mon[74335]: pgmap v1007: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 10:26:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:47 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:26:47.000 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:26:47 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:26:47.000 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:26:47 compute-0 nova_compute[249229]: 2026-01-23 10:26:47.001 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:47.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 10:26:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:47.807Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:48 compute-0 nova_compute[249229]: 2026-01-23 10:26:48.199 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:48.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:48 compute-0 ceph-mon[74335]: pgmap v1008: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 10:26:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:26:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2133716442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:26:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:26:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2133716442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:26:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:48.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:49.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 107 KiB/s wr, 23 op/s
Jan 23 10:26:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2133716442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:26:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2133716442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:26:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:49] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Jan 23 10:26:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:49] "GET /metrics HTTP/1.1" 200 48559 "" "Prometheus/2.51.0"
Jan 23 10:26:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:26:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:26:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:26:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:26:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:26:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:26:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:50.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:26:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:26:51 compute-0 ceph-mon[74335]: pgmap v1009: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 107 KiB/s wr, 23 op/s
Jan 23 10:26:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:26:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:51.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 112 KiB/s wr, 24 op/s
Jan 23 10:26:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:52 compute-0 ceph-mon[74335]: pgmap v1010: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 112 KiB/s wr, 24 op/s
Jan 23 10:26:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:52.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:53 compute-0 nova_compute[249229]: 2026-01-23 10:26:53.200 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:26:53 compute-0 nova_compute[249229]: 2026-01-23 10:26:53.201 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:53 compute-0 nova_compute[249229]: 2026-01-23 10:26:53.202 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 23 10:26:53 compute-0 nova_compute[249229]: 2026-01-23 10:26:53.202 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:26:53 compute-0 nova_compute[249229]: 2026-01-23 10:26:53.202 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:26:53 compute-0 nova_compute[249229]: 2026-01-23 10:26:53.203 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 17 KiB/s wr, 2 op/s
Jan 23 10:26:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:53.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:53.701Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:54.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:54 compute-0 ceph-mon[74335]: pgmap v1011: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 17 KiB/s wr, 2 op/s
Jan 23 10:26:55 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:26:55.002 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:26:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 17 KiB/s wr, 2 op/s
Jan 23 10:26:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:55.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:55 compute-0 podman[268531]: 2026-01-23 10:26:55.591257667 +0000 UTC m=+0.126269048 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 10:26:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:56.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:26:57 compute-0 ceph-mon[74335]: pgmap v1012: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 17 KiB/s wr, 2 op/s
Jan 23 10:26:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 19 KiB/s wr, 8 op/s
Jan 23 10:26:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:57.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:57.808Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:58 compute-0 nova_compute[249229]: 2026-01-23 10:26:58.203 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:26:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:26:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:26:58 compute-0 ceph-mon[74335]: pgmap v1013: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 19 KiB/s wr, 8 op/s
Jan 23 10:26:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:26:58.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:26:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 6.7 KiB/s wr, 8 op/s
Jan 23 10:26:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:26:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:26:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:59.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:26:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:26:59.780 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:26:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:26:59.780 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:26:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:26:59.780 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:26:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:59] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:26:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:26:59] "GET /metrics HTTP/1.1" 200 48557 "" "Prometheus/2.51.0"
Jan 23 10:27:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:27:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:00.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:27:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 17 KiB/s wr, 31 op/s
Jan 23 10:27:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:01.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:01 compute-0 ceph-mon[74335]: pgmap v1014: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 6.7 KiB/s wr, 8 op/s
Jan 23 10:27:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:02.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:03 compute-0 ceph-mon[74335]: pgmap v1015: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 17 KiB/s wr, 31 op/s
Jan 23 10:27:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/138703590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:03 compute-0 nova_compute[249229]: 2026-01-23 10:27:03.204 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:27:03 compute-0 nova_compute[249229]: 2026-01-23 10:27:03.206 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:03 compute-0 nova_compute[249229]: 2026-01-23 10:27:03.206 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 23 10:27:03 compute-0 nova_compute[249229]: 2026-01-23 10:27:03.206 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:27:03 compute-0 nova_compute[249229]: 2026-01-23 10:27:03.206 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:27:03 compute-0 nova_compute[249229]: 2026-01-23 10:27:03.207 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 23 10:27:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:03.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:03.702Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:04.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:04 compute-0 ceph-mon[74335]: pgmap v1016: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 23 10:27:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:27:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:27:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 23 10:27:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:05.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:06.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:06 compute-0 podman[268568]: 2026-01-23 10:27:06.547925056 +0000 UTC m=+0.080937878 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 23 10:27:06 compute-0 sudo[268589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:27:06 compute-0 sudo[268589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:06 compute-0 sudo[268589]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:27:06 compute-0 nova_compute[249229]: 2026-01-23 10:27:06.719 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:27:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 30 op/s
Jan 23 10:27:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:07.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:07 compute-0 nova_compute[249229]: 2026-01-23 10:27:07.443 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:27:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:07.809Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:08 compute-0 nova_compute[249229]: 2026-01-23 10:27:08.207 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:08 compute-0 ceph-mon[74335]: pgmap v1017: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 23 10:27:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:08.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:08.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 11 KiB/s wr, 23 op/s
Jan 23 10:27:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:09.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:09 compute-0 nova_compute[249229]: 2026-01-23 10:27:09.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:27:09 compute-0 nova_compute[249229]: 2026-01-23 10:27:09.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:27:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:09] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Jan 23 10:27:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:09] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Jan 23 10:27:10 compute-0 ceph-mon[74335]: pgmap v1018: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 30 op/s
Jan 23 10:27:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:10.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:10 compute-0 nova_compute[249229]: 2026-01-23 10:27:10.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:27:10 compute-0 nova_compute[249229]: 2026-01-23 10:27:10.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:27:10 compute-0 nova_compute[249229]: 2026-01-23 10:27:10.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:27:10 compute-0 nova_compute[249229]: 2026-01-23 10:27:10.729 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:27:10 compute-0 nova_compute[249229]: 2026-01-23 10:27:10.730 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:27:10 compute-0 nova_compute[249229]: 2026-01-23 10:27:10.749 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:27:10 compute-0 nova_compute[249229]: 2026-01-23 10:27:10.750 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:27:10 compute-0 nova_compute[249229]: 2026-01-23 10:27:10.750 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:27:10 compute-0 nova_compute[249229]: 2026-01-23 10:27:10.750 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:27:10 compute-0 nova_compute[249229]: 2026-01-23 10:27:10.751 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:27:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:27:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44430795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.209 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
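The resource tracker audit above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (via oslo.concurrency processutils) to learn how much capacity the RBD backend has left. A minimal standard-library sketch of the same round trip follows; the pool name in the usage note is an assumption, and the field layout matches what current Ceph releases emit for `ceph df --format=json`.

```python
import json
import subprocess

def ceph_df(client_id: str = "openstack", conf: str = "/etc/ceph/ceph.conf") -> dict:
    """Run `ceph df --format=json` with the given client identity and
    return the parsed report (cluster stats plus per-pool usage)."""
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

def pool_avail_bytes(report: dict, pool_name: str) -> int:
    """Pick the max_avail figure for one pool out of the ceph df report."""
    for pool in report.get("pools", []):
        if pool.get("name") == pool_name:
            return pool["stats"]["max_avail"]
    raise KeyError(pool_name)

# Example (pool name assumed): pool_avail_bytes(ceph_df(), "vms")
```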
Jan 23 10:27:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 11 KiB/s wr, 24 op/s
Jan 23 10:27:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:11.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.364 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.365 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4637MB free_disk=59.94270324707031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.365 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.366 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.484 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.485 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.512 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing inventories for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.595 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating ProviderTree inventory for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.596 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating inventory in ProviderTree for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.609 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing aggregate associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.636 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing trait associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 10:27:11 compute-0 nova_compute[249229]: 2026-01-23 10:27:11.656 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:27:11 compute-0 ceph-mon[74335]: pgmap v1019: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 11 KiB/s wr, 23 op/s
Jan 23 10:27:11 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/44430795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:27:12 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962373468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:12 compute-0 nova_compute[249229]: 2026-01-23 10:27:12.141 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:27:12 compute-0 nova_compute[249229]: 2026-01-23 10:27:12.149 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:27:12 compute-0 nova_compute[249229]: 2026-01-23 10:27:12.214 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:27:12 compute-0 nova_compute[249229]: 2026-01-23 10:27:12.217 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:27:12 compute-0 nova_compute[249229]: 2026-01-23 10:27:12.217 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
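The lockutils lines bracketing this audit ("acquired ... waited 0.000s", "released ... held 0.852s") come from oslo.concurrency wrapping the critical section and logging how long the caller waited for the named lock and how long it held it. A stand-alone sketch of that pattern, using only the standard library rather than oslo.concurrency itself:

```python
import logging
import threading
import time
from contextlib import contextmanager

LOG = logging.getLogger(__name__)
_LOCKS: dict[str, threading.Lock] = {}

@contextmanager
def timed_lock(name: str):
    """Acquire a named lock and log wait/hold times in the same spirit as
    the oslo_concurrency.lockutils messages seen in this log."""
    lock = _LOCKS.setdefault(name, threading.Lock())
    t0 = time.monotonic()
    with lock:
        LOG.debug('Lock "%s" acquired :: waited %.3fs', name, time.monotonic() - t0)
        t1 = time.monotonic()
        try:
            yield
        finally:
            LOG.debug('Lock "%s" released :: held %.3fs', name, time.monotonic() - t1)

# Usage: with timed_lock("compute_resources"): ...update the resource tracker...
```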
Jan 23 10:27:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:12.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:13 compute-0 ceph-mon[74335]: pgmap v1020: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 11 KiB/s wr, 24 op/s
Jan 23 10:27:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3962373468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:13 compute-0 nova_compute[249229]: 2026-01-23 10:27:13.205 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:27:13 compute-0 nova_compute[249229]: 2026-01-23 10:27:13.209 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:27:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:13.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:13.704Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1204745815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1089974770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/506360951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:14.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:14 compute-0 nova_compute[249229]: 2026-01-23 10:27:14.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:27:15 compute-0 ceph-mon[74335]: pgmap v1021: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:27:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3948964576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:27:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:15.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:15 compute-0 nova_compute[249229]: 2026-01-23 10:27:15.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:27:15 compute-0 nova_compute[249229]: 2026-01-23 10:27:15.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:27:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:16.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:16 compute-0 nova_compute[249229]: 2026-01-23 10:27:16.708 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:27:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 852 B/s wr, 26 op/s
Jan 23 10:27:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:17.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:17 compute-0 ceph-mon[74335]: pgmap v1022: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Jan 23 10:27:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:17.811Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:27:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:17.811Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:27:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:17.811Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
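These repeating "Notify for alerts failed" errors mean Alertmanager cannot deliver to the Ceph dashboard's /api/prometheus_receiver webhook on compute-1 and compute-2; the "dial tcp ... i/o timeout" variant shows the TCP connection itself never completes, which usually means nothing is listening on 8443 on those hosts or the port is filtered. A connectivity check equivalent to the step that is failing, sketched with only the standard library (the 3-second timeout is an assumption):

```python
import socket

RECEIVERS = [
    ("compute-1.ctlplane.example.com", 8443),
    ("compute-2.ctlplane.example.com", 8443),
]

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """TCP-connect check mirroring the 'dial tcp ... i/o timeout' failure
    Alertmanager reports before it ever sends the HTTP POST."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in RECEIVERS:
    print(host, port, "reachable" if can_connect(host, port) else "unreachable")
```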
Jan 23 10:27:18 compute-0 nova_compute[249229]: 2026-01-23 10:27:18.211 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:27:18 compute-0 nova_compute[249229]: 2026-01-23 10:27:18.212 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:18 compute-0 nova_compute[249229]: 2026-01-23 10:27:18.212 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 23 10:27:18 compute-0 nova_compute[249229]: 2026-01-23 10:27:18.212 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:27:18 compute-0 nova_compute[249229]: 2026-01-23 10:27:18.213 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 10:27:18 compute-0 nova_compute[249229]: 2026-01-23 10:27:18.214 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
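The ovsdbapp lines above show the OVSDB IDL connection to tcp:127.0.0.1:6640 sitting idle for roughly 5 seconds, sending an inactivity probe (entering IDLE), and dropping back to ACTIVE as soon as traffic is seen again. A tiny illustrative model of that keepalive cycle is sketched below; it is not the ovs.reconnect implementation, just the shape of the state machine these log lines trace.

```python
import time

class ProbeState:
    """Illustrative ACTIVE/IDLE keepalive cycle: after probe_interval
    seconds without traffic, send a probe and go IDLE; activity returns
    us to ACTIVE, continued silence forces a reconnect."""

    def __init__(self, probe_interval: float = 5.0):
        self.probe_interval = probe_interval
        self.state = "ACTIVE"
        self.last_activity = time.monotonic()

    def on_activity(self) -> None:
        self.last_activity = time.monotonic()
        self.state = "ACTIVE"

    def tick(self, send_probe, reconnect) -> None:
        idle = time.monotonic() - self.last_activity
        if self.state == "ACTIVE" and idle >= self.probe_interval:
            send_probe()          # "sending inactivity probe"
            self.state = "IDLE"   # "entering IDLE"
        elif self.state == "IDLE" and idle >= 2 * self.probe_interval:
            reconnect()           # no reply within another interval
```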
Jan 23 10:27:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:18.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:18 compute-0 ceph-mon[74335]: pgmap v1023: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 852 B/s wr, 26 op/s
Jan 23 10:27:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:18.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 852 B/s wr, 25 op/s
Jan 23 10:27:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:19.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:19] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Jan 23 10:27:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:19] "GET /metrics HTTP/1.1" 200 48562 "" "Prometheus/2.51.0"
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:27:20
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'images', '.rgw.root', 'default.rgw.log', 'volumes', 'vms', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.data']
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:27:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:27:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:27:20 compute-0 sudo[268671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:27:20 compute-0 sudo[268671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:20 compute-0 sudo[268671]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:20 compute-0 sudo[268696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 10:27:20 compute-0 sudo[268696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:20.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.416446897180439e-06 of space, bias 1.0, pg target 0.0007249340691541316 quantized to 32 (current 32)
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
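The pg_autoscaler figures above follow a relation that can be checked directly against the logged numbers: the raw pg target is the pool's share of raw capacity times its bias times a constant that works out to 300 for this cluster (0.000665858... x 1.0 x 300 gives the 0.19976 logged for 'images', and 5.087257e-07 x 4.0 x 300 gives the 0.00061 logged for 'cephfs.cephfs.meta'), before the result is quantized and clamped by per-pool minimums. The constant 300 is inferred from these lines and is specific to this cluster's OSD count and target-PGs-per-OSD setting, so treat the sketch below as a reproduction of the logged arithmetic rather than a general formula.

```python
# Reproducing the pg_autoscaler targets logged above. CLUSTER_PG_BUDGET = 300
# is inferred by dividing any logged "pg target" by the corresponding
# "using ... of space" ratio times bias; it is an observation about this
# cluster, not a universal Ceph constant.
CLUSTER_PG_BUDGET = 300

def raw_pg_target(space_ratio: float, bias: float) -> float:
    return space_ratio * bias * CLUSTER_PG_BUDGET

print(raw_pg_target(0.000665858301588852, 1.0))   # ~0.19976  ('images')
print(raw_pg_target(5.087256625643029e-07, 4.0))  # ~0.00061  ('cephfs.cephfs.meta')
print(raw_pg_target(7.185749983720779e-06, 1.0))  # ~0.00216  ('.mgr')
```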
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:27:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:27:20 compute-0 podman[268795]: 2026-01-23 10:27:20.955024617 +0000 UTC m=+0.063390382 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 10:27:21 compute-0 podman[268795]: 2026-01-23 10:27:21.054790248 +0000 UTC m=+0.163156003 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 10:27:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:27:21 compute-0 ceph-mon[74335]: pgmap v1024: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 852 B/s wr, 25 op/s
Jan 23 10:27:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:27:21 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2669587480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:21.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:21 compute-0 podman[268910]: 2026-01-23 10:27:21.611923397 +0000 UTC m=+0.062200447 container exec 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:27:21 compute-0 podman[268910]: 2026-01-23 10:27:21.625050946 +0000 UTC m=+0.075328006 container exec_died 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:27:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:22 compute-0 podman[269050]: 2026-01-23 10:27:22.132227653 +0000 UTC m=+0.051205250 container exec 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 10:27:22 compute-0 podman[269050]: 2026-01-23 10:27:22.168789988 +0000 UTC m=+0.087767595 container exec_died 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 10:27:22 compute-0 podman[269114]: 2026-01-23 10:27:22.36621808 +0000 UTC m=+0.049545462 container exec 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, release=1793, com.redhat.component=keepalived-container, name=keepalived, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, version=2.2.4, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 10:27:22 compute-0 podman[269114]: 2026-01-23 10:27:22.375043195 +0000 UTC m=+0.058370547 container exec_died 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, com.redhat.component=keepalived-container, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Jan 23 10:27:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:22.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:22 compute-0 podman[269179]: 2026-01-23 10:27:22.570559491 +0000 UTC m=+0.050251722 container exec a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:27:22 compute-0 podman[269179]: 2026-01-23 10:27:22.605935752 +0000 UTC m=+0.085627993 container exec_died a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:27:22 compute-0 podman[269253]: 2026-01-23 10:27:22.788434163 +0000 UTC m=+0.045505455 container exec 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 10:27:22 compute-0 ceph-mon[74335]: pgmap v1025: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:27:23 compute-0 podman[269253]: 2026-01-23 10:27:23.040897873 +0000 UTC m=+0.297969145 container exec_died 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 10:27:23 compute-0 nova_compute[249229]: 2026-01-23 10:27:23.214 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:27:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:23.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:23.704Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:23 compute-0 podman[269362]: 2026-01-23 10:27:23.822775642 +0000 UTC m=+0.485631795 container exec 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:27:24 compute-0 podman[269362]: 2026-01-23 10:27:24.248805426 +0000 UTC m=+0.911661589 container exec_died 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:27:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:24.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:24 compute-0 sudo[268696]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:27:24 compute-0 ceph-mon[74335]: pgmap v1026: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:27:24 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:27:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:27:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:25.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:25 compute-0 sudo[269407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:27:25 compute-0 sudo[269407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:25 compute-0 sudo[269407]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:25 compute-0 sudo[269432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:27:25 compute-0 sudo[269432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:25 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:25 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:25 compute-0 sudo[269432]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:27:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:27:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:27:25 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:27:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Jan 23 10:27:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:27:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:27:26 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 23 10:27:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:27:26 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:27:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:27:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:27:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:27:26 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:27:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:26.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:26 compute-0 sudo[269490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:27:26 compute-0 sudo[269490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:26 compute-0 sudo[269490]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:26 compute-0 sudo[269519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:27:26 compute-0 sudo[269519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:26 compute-0 podman[269514]: 2026-01-23 10:27:26.562105059 +0000 UTC m=+0.098711172 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 10:27:26 compute-0 sudo[269565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:27:26 compute-0 sudo[269565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:26 compute-0 sudo[269565]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:26 compute-0 ceph-mon[74335]: pgmap v1027: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:27:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:27:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:27:26 compute-0 ceph-mon[74335]: pgmap v1028: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Jan 23 10:27:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:26 compute-0 ceph-mon[74335]: Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 23 10:27:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:27:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:27:26 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:27:26 compute-0 podman[269632]: 2026-01-23 10:27:26.918226123 +0000 UTC m=+0.038210334 container create 8c90d286497cccd8755c67d0b05abb120223f5ea5f02bf859c30b38dc08397a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 10:27:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:26 compute-0 systemd[1]: Started libpod-conmon-8c90d286497cccd8755c67d0b05abb120223f5ea5f02bf859c30b38dc08397a6.scope.
Jan 23 10:27:26 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:27:26 compute-0 podman[269632]: 2026-01-23 10:27:26.99013161 +0000 UTC m=+0.110115821 container init 8c90d286497cccd8755c67d0b05abb120223f5ea5f02bf859c30b38dc08397a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 10:27:26 compute-0 podman[269632]: 2026-01-23 10:27:26.996597306 +0000 UTC m=+0.116581517 container start 8c90d286497cccd8755c67d0b05abb120223f5ea5f02bf859c30b38dc08397a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:27:26 compute-0 podman[269632]: 2026-01-23 10:27:26.90185172 +0000 UTC m=+0.021836101 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:27:26 compute-0 podman[269632]: 2026-01-23 10:27:26.999708966 +0000 UTC m=+0.119693207 container attach 8c90d286497cccd8755c67d0b05abb120223f5ea5f02bf859c30b38dc08397a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 23 10:27:27 compute-0 tender_perlman[269648]: 167 167
Jan 23 10:27:27 compute-0 systemd[1]: libpod-8c90d286497cccd8755c67d0b05abb120223f5ea5f02bf859c30b38dc08397a6.scope: Deactivated successfully.
Jan 23 10:27:27 compute-0 podman[269632]: 2026-01-23 10:27:27.001850478 +0000 UTC m=+0.121834689 container died 8c90d286497cccd8755c67d0b05abb120223f5ea5f02bf859c30b38dc08397a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 10:27:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-186e31d8fd059c977bfc3955b2027a602287e0664ac34cb1e67431f873249e43-merged.mount: Deactivated successfully.
Jan 23 10:27:27 compute-0 podman[269632]: 2026-01-23 10:27:27.042589865 +0000 UTC m=+0.162574076 container remove 8c90d286497cccd8755c67d0b05abb120223f5ea5f02bf859c30b38dc08397a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:27:27 compute-0 systemd[1]: libpod-conmon-8c90d286497cccd8755c67d0b05abb120223f5ea5f02bf859c30b38dc08397a6.scope: Deactivated successfully.
Jan 23 10:27:27 compute-0 podman[269672]: 2026-01-23 10:27:27.208040461 +0000 UTC m=+0.047011858 container create 71888d4be13fa3143a1354ee609f73b70aa2e5d83512c54c4d9720a3a4a120d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:27:27 compute-0 systemd[1]: Started libpod-conmon-71888d4be13fa3143a1354ee609f73b70aa2e5d83512c54c4d9720a3a4a120d1.scope.
Jan 23 10:27:27 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f1f2fa31e713c9d8867b0b5060b2494c5e4efa11e587b18abc449980cfe28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f1f2fa31e713c9d8867b0b5060b2494c5e4efa11e587b18abc449980cfe28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:27 compute-0 podman[269672]: 2026-01-23 10:27:27.189487055 +0000 UTC m=+0.028458452 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f1f2fa31e713c9d8867b0b5060b2494c5e4efa11e587b18abc449980cfe28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f1f2fa31e713c9d8867b0b5060b2494c5e4efa11e587b18abc449980cfe28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f1f2fa31e713c9d8867b0b5060b2494c5e4efa11e587b18abc449980cfe28/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:27 compute-0 podman[269672]: 2026-01-23 10:27:27.301057037 +0000 UTC m=+0.140028444 container init 71888d4be13fa3143a1354ee609f73b70aa2e5d83512c54c4d9720a3a4a120d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:27:27 compute-0 podman[269672]: 2026-01-23 10:27:27.308087111 +0000 UTC m=+0.147058508 container start 71888d4be13fa3143a1354ee609f73b70aa2e5d83512c54c4d9720a3a4a120d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_faraday, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:27:27 compute-0 podman[269672]: 2026-01-23 10:27:27.311642563 +0000 UTC m=+0.150613980 container attach 71888d4be13fa3143a1354ee609f73b70aa2e5d83512c54c4d9720a3a4a120d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:27:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:27.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:27 compute-0 priceless_faraday[269689]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:27:27 compute-0 priceless_faraday[269689]: --> All data devices are unavailable
Jan 23 10:27:27 compute-0 systemd[1]: libpod-71888d4be13fa3143a1354ee609f73b70aa2e5d83512c54c4d9720a3a4a120d1.scope: Deactivated successfully.
Jan 23 10:27:27 compute-0 podman[269672]: 2026-01-23 10:27:27.639495201 +0000 UTC m=+0.478466648 container died 71888d4be13fa3143a1354ee609f73b70aa2e5d83512c54c4d9720a3a4a120d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_faraday, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 10:27:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-821f1f2fa31e713c9d8867b0b5060b2494c5e4efa11e587b18abc449980cfe28-merged.mount: Deactivated successfully.
Jan 23 10:27:27 compute-0 podman[269672]: 2026-01-23 10:27:27.68307836 +0000 UTC m=+0.522049757 container remove 71888d4be13fa3143a1354ee609f73b70aa2e5d83512c54c4d9720a3a4a120d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_faraday, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 10:27:27 compute-0 systemd[1]: libpod-conmon-71888d4be13fa3143a1354ee609f73b70aa2e5d83512c54c4d9720a3a4a120d1.scope: Deactivated successfully.
Jan 23 10:27:27 compute-0 sudo[269519]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:27 compute-0 sudo[269715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:27:27 compute-0 sudo[269715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:27 compute-0 sudo[269715]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:27.811Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:27 compute-0 sudo[269740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:27:27 compute-0 sudo[269740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 383 B/s wr, 3 op/s
Jan 23 10:27:28 compute-0 podman[269807]: 2026-01-23 10:27:28.212478678 +0000 UTC m=+0.039154872 container create 962de4466711a794a3f42cfc7d3469f68c4b41bdfa345d44d48f504c5c77fe78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sanderson, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 10:27:28 compute-0 nova_compute[249229]: 2026-01-23 10:27:28.216 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:28 compute-0 systemd[1]: Started libpod-conmon-962de4466711a794a3f42cfc7d3469f68c4b41bdfa345d44d48f504c5c77fe78.scope.
Jan 23 10:27:28 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:27:28 compute-0 podman[269807]: 2026-01-23 10:27:28.197750653 +0000 UTC m=+0.024426867 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:27:28 compute-0 podman[269807]: 2026-01-23 10:27:28.310828168 +0000 UTC m=+0.137504412 container init 962de4466711a794a3f42cfc7d3469f68c4b41bdfa345d44d48f504c5c77fe78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sanderson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 10:27:28 compute-0 podman[269807]: 2026-01-23 10:27:28.319571721 +0000 UTC m=+0.146247915 container start 962de4466711a794a3f42cfc7d3469f68c4b41bdfa345d44d48f504c5c77fe78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sanderson, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:27:28 compute-0 wizardly_sanderson[269823]: 167 167
Jan 23 10:27:28 compute-0 systemd[1]: libpod-962de4466711a794a3f42cfc7d3469f68c4b41bdfa345d44d48f504c5c77fe78.scope: Deactivated successfully.
Jan 23 10:27:28 compute-0 podman[269807]: 2026-01-23 10:27:28.35901543 +0000 UTC m=+0.185691624 container attach 962de4466711a794a3f42cfc7d3469f68c4b41bdfa345d44d48f504c5c77fe78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sanderson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:27:28 compute-0 podman[269807]: 2026-01-23 10:27:28.359652668 +0000 UTC m=+0.186328872 container died 962de4466711a794a3f42cfc7d3469f68c4b41bdfa345d44d48f504c5c77fe78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecdb527c6a2bcfdb51877cee8abefafb99f6905367ca6e1fced4d44789784470-merged.mount: Deactivated successfully.
Jan 23 10:27:28 compute-0 podman[269807]: 2026-01-23 10:27:28.400871239 +0000 UTC m=+0.227547433 container remove 962de4466711a794a3f42cfc7d3469f68c4b41bdfa345d44d48f504c5c77fe78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:27:28 compute-0 systemd[1]: libpod-conmon-962de4466711a794a3f42cfc7d3469f68c4b41bdfa345d44d48f504c5c77fe78.scope: Deactivated successfully.
Jan 23 10:27:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:28.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:28 compute-0 podman[269848]: 2026-01-23 10:27:28.608874466 +0000 UTC m=+0.077996824 container create 35b7f9f1ee58612b52690ecf8c85a5bd285ae20b35897ae9815e2c15a62f626e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_joliot, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 10:27:28 compute-0 podman[269848]: 2026-01-23 10:27:28.559105838 +0000 UTC m=+0.028228216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:27:28 compute-0 systemd[1]: Started libpod-conmon-35b7f9f1ee58612b52690ecf8c85a5bd285ae20b35897ae9815e2c15a62f626e.scope.
Jan 23 10:27:28 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f243537a5b90d7337c80ae16e2fb95c9d18f6f937b7a8bb19f7a9c80a75c6d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f243537a5b90d7337c80ae16e2fb95c9d18f6f937b7a8bb19f7a9c80a75c6d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f243537a5b90d7337c80ae16e2fb95c9d18f6f937b7a8bb19f7a9c80a75c6d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f243537a5b90d7337c80ae16e2fb95c9d18f6f937b7a8bb19f7a9c80a75c6d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:28 compute-0 podman[269848]: 2026-01-23 10:27:28.707857094 +0000 UTC m=+0.176979462 container init 35b7f9f1ee58612b52690ecf8c85a5bd285ae20b35897ae9815e2c15a62f626e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_joliot, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:27:28 compute-0 podman[269848]: 2026-01-23 10:27:28.714336871 +0000 UTC m=+0.183459229 container start 35b7f9f1ee58612b52690ecf8c85a5bd285ae20b35897ae9815e2c15a62f626e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 23 10:27:28 compute-0 podman[269848]: 2026-01-23 10:27:28.717670237 +0000 UTC m=+0.186792615 container attach 35b7f9f1ee58612b52690ecf8c85a5bd285ae20b35897ae9815e2c15a62f626e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_joliot, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:27:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:28.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:27:28 compute-0 silly_joliot[269864]: {
Jan 23 10:27:28 compute-0 silly_joliot[269864]:     "1": [
Jan 23 10:27:28 compute-0 silly_joliot[269864]:         {
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             "devices": [
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "/dev/loop3"
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             ],
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             "lv_name": "ceph_lv0",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             "lv_size": "21470642176",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             "name": "ceph_lv0",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             "tags": {
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.cluster_name": "ceph",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.crush_device_class": "",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.encrypted": "0",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.osd_id": "1",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.type": "block",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.vdo": "0",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:                 "ceph.with_tpm": "0"
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             },
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             "type": "block",
Jan 23 10:27:28 compute-0 silly_joliot[269864]:             "vg_name": "ceph_vg0"
Jan 23 10:27:28 compute-0 silly_joliot[269864]:         }
Jan 23 10:27:28 compute-0 silly_joliot[269864]:     ]
Jan 23 10:27:28 compute-0 silly_joliot[269864]: }
Jan 23 10:27:29 compute-0 systemd[1]: libpod-35b7f9f1ee58612b52690ecf8c85a5bd285ae20b35897ae9815e2c15a62f626e.scope: Deactivated successfully.
Jan 23 10:27:29 compute-0 podman[269848]: 2026-01-23 10:27:29.007472386 +0000 UTC m=+0.476594764 container died 35b7f9f1ee58612b52690ecf8c85a5bd285ae20b35897ae9815e2c15a62f626e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f243537a5b90d7337c80ae16e2fb95c9d18f6f937b7a8bb19f7a9c80a75c6d2-merged.mount: Deactivated successfully.
Jan 23 10:27:29 compute-0 podman[269848]: 2026-01-23 10:27:29.045388541 +0000 UTC m=+0.514510899 container remove 35b7f9f1ee58612b52690ecf8c85a5bd285ae20b35897ae9815e2c15a62f626e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_joliot, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:27:29 compute-0 systemd[1]: libpod-conmon-35b7f9f1ee58612b52690ecf8c85a5bd285ae20b35897ae9815e2c15a62f626e.scope: Deactivated successfully.
Jan 23 10:27:29 compute-0 sudo[269740]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:29 compute-0 sudo[269883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:27:29 compute-0 sudo[269883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:29 compute-0 sudo[269883]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:29 compute-0 sudo[269908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:27:29 compute-0 sudo[269908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:29.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:29 compute-0 ceph-mon[74335]: pgmap v1029: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 383 B/s wr, 3 op/s
Jan 23 10:27:29 compute-0 podman[269976]: 2026-01-23 10:27:29.55747691 +0000 UTC m=+0.039353048 container create 87cf4c913351feb4966c3539a03dc8e118b956a7c0c6994b33d20142cf21d432 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_euclid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:27:29 compute-0 systemd[1]: Started libpod-conmon-87cf4c913351feb4966c3539a03dc8e118b956a7c0c6994b33d20142cf21d432.scope.
Jan 23 10:27:29 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:27:29 compute-0 podman[269976]: 2026-01-23 10:27:29.619901202 +0000 UTC m=+0.101777360 container init 87cf4c913351feb4966c3539a03dc8e118b956a7c0c6994b33d20142cf21d432 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 23 10:27:29 compute-0 podman[269976]: 2026-01-23 10:27:29.626134532 +0000 UTC m=+0.108010660 container start 87cf4c913351feb4966c3539a03dc8e118b956a7c0c6994b33d20142cf21d432 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 23 10:27:29 compute-0 podman[269976]: 2026-01-23 10:27:29.629100308 +0000 UTC m=+0.110976446 container attach 87cf4c913351feb4966c3539a03dc8e118b956a7c0c6994b33d20142cf21d432 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_euclid, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:27:29 compute-0 lucid_euclid[269992]: 167 167
Jan 23 10:27:29 compute-0 systemd[1]: libpod-87cf4c913351feb4966c3539a03dc8e118b956a7c0c6994b33d20142cf21d432.scope: Deactivated successfully.
Jan 23 10:27:29 compute-0 conmon[269992]: conmon 87cf4c913351feb4966c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87cf4c913351feb4966c3539a03dc8e118b956a7c0c6994b33d20142cf21d432.scope/container/memory.events
Jan 23 10:27:29 compute-0 podman[269976]: 2026-01-23 10:27:29.633241208 +0000 UTC m=+0.115117346 container died 87cf4c913351feb4966c3539a03dc8e118b956a7c0c6994b33d20142cf21d432 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 23 10:27:29 compute-0 podman[269976]: 2026-01-23 10:27:29.540891461 +0000 UTC m=+0.022767629 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-19373f4d120c80f6693738eb0561d0b3df68ae294f6a9a47fccc17eeed981580-merged.mount: Deactivated successfully.
Jan 23 10:27:29 compute-0 podman[269976]: 2026-01-23 10:27:29.668573238 +0000 UTC m=+0.150449376 container remove 87cf4c913351feb4966c3539a03dc8e118b956a7c0c6994b33d20142cf21d432 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 10:27:29 compute-0 systemd[1]: libpod-conmon-87cf4c913351feb4966c3539a03dc8e118b956a7c0c6994b33d20142cf21d432.scope: Deactivated successfully.
Jan 23 10:27:29 compute-0 podman[270016]: 2026-01-23 10:27:29.811883447 +0000 UTC m=+0.039705898 container create d155a4e5709de856f7565e0aa962177a823600901b9cc1f5fdfb495e69b1e463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_montalcini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:27:29 compute-0 systemd[1]: Started libpod-conmon-d155a4e5709de856f7565e0aa962177a823600901b9cc1f5fdfb495e69b1e463.scope.
Jan 23 10:27:29 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe670b96c1a239d7bf128c9897bdadaf8ea3203f8cb21f9317e4eed9ef5b910/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe670b96c1a239d7bf128c9897bdadaf8ea3203f8cb21f9317e4eed9ef5b910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe670b96c1a239d7bf128c9897bdadaf8ea3203f8cb21f9317e4eed9ef5b910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe670b96c1a239d7bf128c9897bdadaf8ea3203f8cb21f9317e4eed9ef5b910/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:27:29 compute-0 podman[270016]: 2026-01-23 10:27:29.885647467 +0000 UTC m=+0.113469928 container init d155a4e5709de856f7565e0aa962177a823600901b9cc1f5fdfb495e69b1e463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_montalcini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 10:27:29 compute-0 podman[270016]: 2026-01-23 10:27:29.794932327 +0000 UTC m=+0.022754788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:27:29 compute-0 podman[270016]: 2026-01-23 10:27:29.894567325 +0000 UTC m=+0.122389766 container start d155a4e5709de856f7565e0aa962177a823600901b9cc1f5fdfb495e69b1e463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:27:29 compute-0 podman[270016]: 2026-01-23 10:27:29.897807418 +0000 UTC m=+0.125629889 container attach d155a4e5709de856f7565e0aa962177a823600901b9cc1f5fdfb495e69b1e463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:27:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:29] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Jan 23 10:27:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:29] "GET /metrics HTTP/1.1" 200 48555 "" "Prometheus/2.51.0"
Jan 23 10:27:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 383 B/s wr, 3 op/s
Jan 23 10:27:30 compute-0 ceph-mon[74335]: pgmap v1030: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 383 B/s wr, 3 op/s
Jan 23 10:27:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:30.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:30 compute-0 lvm[270108]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:27:30 compute-0 lvm[270108]: VG ceph_vg0 finished
Jan 23 10:27:30 compute-0 dazzling_montalcini[270033]: {}
Jan 23 10:27:30 compute-0 systemd[1]: libpod-d155a4e5709de856f7565e0aa962177a823600901b9cc1f5fdfb495e69b1e463.scope: Deactivated successfully.
Jan 23 10:27:30 compute-0 podman[270016]: 2026-01-23 10:27:30.522663693 +0000 UTC m=+0.750486134 container died d155a4e5709de856f7565e0aa962177a823600901b9cc1f5fdfb495e69b1e463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_montalcini, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 10:27:30 compute-0 systemd[1]: libpod-d155a4e5709de856f7565e0aa962177a823600901b9cc1f5fdfb495e69b1e463.scope: Consumed 1.011s CPU time.
Jan 23 10:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fe670b96c1a239d7bf128c9897bdadaf8ea3203f8cb21f9317e4eed9ef5b910-merged.mount: Deactivated successfully.
Jan 23 10:27:30 compute-0 podman[270016]: 2026-01-23 10:27:30.567082356 +0000 UTC m=+0.794904797 container remove d155a4e5709de856f7565e0aa962177a823600901b9cc1f5fdfb495e69b1e463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_montalcini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 10:27:30 compute-0 systemd[1]: libpod-conmon-d155a4e5709de856f7565e0aa962177a823600901b9cc1f5fdfb495e69b1e463.scope: Deactivated successfully.
Jan 23 10:27:30 compute-0 sudo[269908]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:27:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:31.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:31 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:27:31 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:31 compute-0 sudo[270125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:27:31 compute-0 sudo[270125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:31 compute-0 sudo[270125]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 574 B/s rd, 0 op/s
Jan 23 10:27:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:27:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:32.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:27:32 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:32 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:27:32 compute-0 ceph-mon[74335]: pgmap v1031: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 574 B/s rd, 0 op/s
Jan 23 10:27:33 compute-0 nova_compute[249229]: 2026-01-23 10:27:33.218 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:33.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:33.706Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 574 B/s rd, 0 op/s
Jan 23 10:27:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:34.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:27:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:27:35 compute-0 ceph-mon[74335]: pgmap v1032: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 574 B/s rd, 0 op/s
Jan 23 10:27:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:27:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:35.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 574 B/s rd, 0 op/s
Jan 23 10:27:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:36.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:37 compute-0 ceph-mon[74335]: pgmap v1033: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 574 B/s rd, 0 op/s
Jan 23 10:27:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:37.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:37 compute-0 podman[270156]: 2026-01-23 10:27:37.524168241 +0000 UTC m=+0.051421646 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 10:27:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:37.814Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:27:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:37.814Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:27:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:37.814Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:27:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:27:38 compute-0 nova_compute[249229]: 2026-01-23 10:27:38.221 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:38.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:38 compute-0 ceph-mon[74335]: pgmap v1034: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:27:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:38.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:39.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:39] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:27:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:39] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:27:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:27:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:40.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:40 compute-0 ceph-mon[74335]: pgmap v1035: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:27:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:41.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:27:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:27:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6956 writes, 30K keys, 6950 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 6956 writes, 6950 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1551 writes, 6639 keys, 1549 commit groups, 1.0 writes per commit group, ingest: 11.68 MB, 0.02 MB/s
                                           Interval WAL: 1551 writes, 1549 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     48.0      0.94              0.35        17    0.055       0      0       0.0       0.0
                                             L6      1/0   12.03 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.4     70.5     60.5      3.29              0.59        16    0.206     87K   8836       0.0       0.0
                                            Sum      1/0   12.03 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.4     54.9     57.7      4.23              0.95        33    0.128     87K   8836       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.6     65.3     64.7      0.92              0.19         8    0.115     25K   2538       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     70.5     60.5      3.29              0.59        16    0.206     87K   8836       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     48.1      0.93              0.35        16    0.058       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.044, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.24 GB write, 0.10 MB/s write, 0.23 GB read, 0.10 MB/s read, 4.2 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5569ddb77350#2 capacity: 304.00 MB usage: 19.36 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.00015 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1044,18.68 MB,6.14567%) FilterBlock(34,256.23 KB,0.0823121%) IndexBlock(34,441.47 KB,0.141816%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 10:27:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:42.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:43 compute-0 nova_compute[249229]: 2026-01-23 10:27:43.223 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:27:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:43.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:43.707Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:43 compute-0 ceph-mon[74335]: pgmap v1036: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:27:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:27:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:44.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:45.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:27:46 compute-0 ceph-mon[74335]: pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:27:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:46.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:46 compute-0 sudo[270187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:27:46 compute-0 sudo[270187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:27:46 compute-0 sudo[270187]: pam_unix(sudo:session): session closed for user root
Jan 23 10:27:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:47 compute-0 ceph-mon[74335]: pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:27:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:47.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:47 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:27:47.571 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:27:47 compute-0 nova_compute[249229]: 2026-01-23 10:27:47.572 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:47 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:27:47.572 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:27:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:47.815Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:27:48 compute-0 nova_compute[249229]: 2026-01-23 10:27:48.226 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:48.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:27:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2175178822' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:27:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:27:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2175178822' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:27:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:48.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:49 compute-0 ceph-mon[74335]: pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:27:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2175178822' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:27:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2175178822' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:27:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:49.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:49] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:27:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:49] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:27:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:27:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:27:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:27:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:27:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:27:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:27:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:27:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:27:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:27:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:27:51 compute-0 nova_compute[249229]: 2026-01-23 10:27:51.261 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "2cb20ba8-5b68-4715-9848-bad345c47a31" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:27:51 compute-0 nova_compute[249229]: 2026-01-23 10:27:51.262 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:27:51 compute-0 nova_compute[249229]: 2026-01-23 10:27:51.289 249233 DEBUG nova.compute.manager [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 10:27:51 compute-0 nova_compute[249229]: 2026-01-23 10:27:51.366 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:27:51 compute-0 nova_compute[249229]: 2026-01-23 10:27:51.367 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:27:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:51.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:51 compute-0 nova_compute[249229]: 2026-01-23 10:27:51.380 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 10:27:51 compute-0 nova_compute[249229]: 2026-01-23 10:27:51.381 249233 INFO nova.compute.claims [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Claim successful on node compute-0.ctlplane.example.com
Jan 23 10:27:51 compute-0 nova_compute[249229]: 2026-01-23 10:27:51.486 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:27:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:27:51 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855952180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:51 compute-0 nova_compute[249229]: 2026-01-23 10:27:51.957 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:27:51 compute-0 nova_compute[249229]: 2026-01-23 10:27:51.965 249233 DEBUG nova.compute.provider_tree [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:27:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.007 249233 DEBUG nova.scheduler.client.report [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.099 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.099 249233 DEBUG nova.compute.manager [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.151 249233 DEBUG nova.compute.manager [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.152 249233 DEBUG nova.network.neutron [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.177 249233 INFO nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 10:27:52 compute-0 ceph-mon[74335]: pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:27:52 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/855952180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.200 249233 DEBUG nova.compute.manager [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.307 249233 DEBUG nova.compute.manager [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.309 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.309 249233 INFO nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Creating image(s)
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.336 249233 DEBUG nova.storage.rbd_utils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 2cb20ba8-5b68-4715-9848-bad345c47a31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.361 249233 DEBUG nova.storage.rbd_utils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 2cb20ba8-5b68-4715-9848-bad345c47a31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.386 249233 DEBUG nova.storage.rbd_utils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 2cb20ba8-5b68-4715-9848-bad345c47a31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.389 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.408 249233 DEBUG nova.policy [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f459c4e71e6c47acb0f8aaf83f34695e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.451 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.452 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "379b2821245bc82aa5a95839eddb9a97716b559c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.453 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "379b2821245bc82aa5a95839eddb9a97716b559c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.453 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "379b2821245bc82aa5a95839eddb9a97716b559c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:27:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:52.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.479 249233 DEBUG nova.storage.rbd_utils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 2cb20ba8-5b68-4715-9848-bad345c47a31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:27:52 compute-0 nova_compute[249229]: 2026-01-23 10:27:52.483 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c 2cb20ba8-5b68-4715-9848-bad345c47a31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:27:52 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:27:52.575 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:27:53 compute-0 nova_compute[249229]: 2026-01-23 10:27:53.228 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:27:53 compute-0 nova_compute[249229]: 2026-01-23 10:27:53.358 249233 DEBUG nova.network.neutron [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Successfully created port: 5775c66b-2d08-4c9d-83fe-d4c692e19472 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 10:27:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:53.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:53.708Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:27:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:53.708Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:27:53 compute-0 ceph-mon[74335]: pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:27:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:27:54 compute-0 nova_compute[249229]: 2026-01-23 10:27:54.166 249233 DEBUG nova.network.neutron [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Successfully updated port: 5775c66b-2d08-4c9d-83fe-d4c692e19472 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 10:27:54 compute-0 nova_compute[249229]: 2026-01-23 10:27:54.186 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:27:54 compute-0 nova_compute[249229]: 2026-01-23 10:27:54.186 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquired lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:27:54 compute-0 nova_compute[249229]: 2026-01-23 10:27:54.187 249233 DEBUG nova.network.neutron [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 10:27:54 compute-0 nova_compute[249229]: 2026-01-23 10:27:54.250 249233 DEBUG nova.compute.manager [req-6d8bc864-d527-47ea-8106-2ae5a22e01e5 req-cb146d69-e911-40b6-8fd4-cde3929eacd8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Received event network-changed-5775c66b-2d08-4c9d-83fe-d4c692e19472 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:27:54 compute-0 nova_compute[249229]: 2026-01-23 10:27:54.251 249233 DEBUG nova.compute.manager [req-6d8bc864-d527-47ea-8106-2ae5a22e01e5 req-cb146d69-e911-40b6-8fd4-cde3929eacd8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Refreshing instance network info cache due to event network-changed-5775c66b-2d08-4c9d-83fe-d4c692e19472. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 10:27:54 compute-0 nova_compute[249229]: 2026-01-23 10:27:54.251 249233 DEBUG oslo_concurrency.lockutils [req-6d8bc864-d527-47ea-8106-2ae5a22e01e5 req-cb146d69-e911-40b6-8fd4-cde3929eacd8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:27:54 compute-0 nova_compute[249229]: 2026-01-23 10:27:54.294 249233 DEBUG nova.network.neutron [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 10:27:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:54.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:55 compute-0 nova_compute[249229]: 2026-01-23 10:27:55.057 249233 DEBUG nova.network.neutron [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Updating instance_info_cache with network_info: [{"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:27:55 compute-0 nova_compute[249229]: 2026-01-23 10:27:55.074 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Releasing lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:27:55 compute-0 nova_compute[249229]: 2026-01-23 10:27:55.075 249233 DEBUG nova.compute.manager [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Instance network_info: |[{"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 10:27:55 compute-0 nova_compute[249229]: 2026-01-23 10:27:55.075 249233 DEBUG oslo_concurrency.lockutils [req-6d8bc864-d527-47ea-8106-2ae5a22e01e5 req-cb146d69-e911-40b6-8fd4-cde3929eacd8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquired lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:27:55 compute-0 nova_compute[249229]: 2026-01-23 10:27:55.075 249233 DEBUG nova.network.neutron [req-6d8bc864-d527-47ea-8106-2ae5a22e01e5 req-cb146d69-e911-40b6-8fd4-cde3929eacd8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Refreshing network info cache for port 5775c66b-2d08-4c9d-83fe-d4c692e19472 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 10:27:55 compute-0 ceph-mon[74335]: pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:27:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:55.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 54 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 822 KiB/s wr, 1 op/s
Jan 23 10:27:56 compute-0 nova_compute[249229]: 2026-01-23 10:27:56.192 249233 DEBUG nova.network.neutron [req-6d8bc864-d527-47ea-8106-2ae5a22e01e5 req-cb146d69-e911-40b6-8fd4-cde3929eacd8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Updated VIF entry in instance network info cache for port 5775c66b-2d08-4c9d-83fe-d4c692e19472. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 10:27:56 compute-0 nova_compute[249229]: 2026-01-23 10:27:56.193 249233 DEBUG nova.network.neutron [req-6d8bc864-d527-47ea-8106-2ae5a22e01e5 req-cb146d69-e911-40b6-8fd4-cde3929eacd8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Updating instance_info_cache with network_info: [{"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:27:56 compute-0 nova_compute[249229]: 2026-01-23 10:27:56.206 249233 DEBUG oslo_concurrency.lockutils [req-6d8bc864-d527-47ea-8106-2ae5a22e01e5 req-cb146d69-e911-40b6-8fd4-cde3929eacd8 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Releasing lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:27:56 compute-0 nova_compute[249229]: 2026-01-23 10:27:56.422 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/379b2821245bc82aa5a95839eddb9a97716b559c 2cb20ba8-5b68-4715-9848-bad345c47a31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.940s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:27:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:27:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:56.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:27:56 compute-0 nova_compute[249229]: 2026-01-23 10:27:56.506 249233 DEBUG nova.storage.rbd_utils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] resizing rbd image 2cb20ba8-5b68-4715-9848-bad345c47a31_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 10:27:56 compute-0 ceph-mon[74335]: pgmap v1043: 353 pgs: 353 active+clean; 54 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 822 KiB/s wr, 1 op/s
Jan 23 10:27:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:27:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:57.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:57 compute-0 podman[270393]: 2026-01-23 10:27:57.56425647 +0000 UTC m=+0.098148835 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Jan 23 10:27:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:57.816Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 84 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.230 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.232 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:58.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.732 249233 DEBUG nova.objects.instance [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'migration_context' on Instance uuid 2cb20ba8-5b68-4715-9848-bad345c47a31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.753 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.753 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Ensure instance console log exists: /var/lib/nova/instances/2cb20ba8-5b68-4715-9848-bad345c47a31/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.754 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.754 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.755 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.757 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Start _get_guest_xml network_info=[{"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T10:15:36Z,direct_url=<?>,disk_format='qcow2',id=271ec98e-d058-421b-bbfb-4b4a5954c90a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5220cd4f58cb43bb899e367e961bc5c1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T10:15:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'size': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '271ec98e-d058-421b-bbfb-4b4a5954c90a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.762 249233 WARNING nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.766 249233 DEBUG nova.virt.libvirt.host [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.767 249233 DEBUG nova.virt.libvirt.host [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.770 249233 DEBUG nova.virt.libvirt.host [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.770 249233 DEBUG nova.virt.libvirt.host [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.771 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.771 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T10:15:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1d8c8bf4-786e-4009-bc53-f259480fb5b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T10:15:36Z,direct_url=<?>,disk_format='qcow2',id=271ec98e-d058-421b-bbfb-4b4a5954c90a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5220cd4f58cb43bb899e367e961bc5c1',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T10:15:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.772 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.772 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.772 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.772 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.773 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.773 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.773 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.774 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.774 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.774 249233 DEBUG nova.virt.hardware [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 10:27:58 compute-0 nova_compute[249229]: 2026-01-23 10:27:58.778 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:27:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:58.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:27:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:27:58.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:27:58 compute-0 ceph-mon[74335]: pgmap v1044: 353 pgs: 353 active+clean; 84 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Jan 23 10:27:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 10:27:59 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2299921761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.258 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.293 249233 DEBUG nova.storage.rbd_utils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 2cb20ba8-5b68-4715-9848-bad345c47a31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.298 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:27:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:27:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:27:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:59.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:27:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 23 10:27:59 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3503106741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.763 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.765 249233 DEBUG nova.virt.libvirt.vif [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:27:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-823186335',display_name='tempest-TestNetworkBasicOps-server-823186335',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-823186335',id=13,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALoQy92z4DUQMTV+DqFHSml8UJUaVcff/mSaypHwFMH7pDs09vJt3HuFEDGESi4DTro4DoZamY+RqX7NCM6Mkdp29d9ri0gEUF5j3pATC3bt0D18Sus1fVbyPJqdBijKQ==',key_name='tempest-TestNetworkBasicOps-1062461069',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-2q6ugufv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:27:52Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=2cb20ba8-5b68-4715-9848-bad345c47a31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.765 249233 DEBUG nova.network.os_vif_util [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.766 249233 DEBUG nova.network.os_vif_util [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:d3:68,bridge_name='br-int',has_traffic_filtering=True,id=5775c66b-2d08-4c9d-83fe-d4c692e19472,network=Network(64d8458c-fab0-469a-aa4f-0a8a3ecc755f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5775c66b-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.767 249233 DEBUG nova.objects.instance [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2cb20ba8-5b68-4715-9848-bad345c47a31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:27:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:27:59.781 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:27:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:27:59.782 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:27:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:27:59.782 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.786 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] End _get_guest_xml xml=<domain type="kvm">
Jan 23 10:27:59 compute-0 nova_compute[249229]:   <uuid>2cb20ba8-5b68-4715-9848-bad345c47a31</uuid>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   <name>instance-0000000d</name>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   <memory>131072</memory>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   <vcpu>1</vcpu>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   <metadata>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <nova:name>tempest-TestNetworkBasicOps-server-823186335</nova:name>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <nova:creationTime>2026-01-23 10:27:58</nova:creationTime>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <nova:flavor name="m1.nano">
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <nova:memory>128</nova:memory>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <nova:disk>1</nova:disk>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <nova:swap>0</nova:swap>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <nova:ephemeral>0</nova:ephemeral>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <nova:vcpus>1</nova:vcpus>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       </nova:flavor>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <nova:owner>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <nova:user uuid="f459c4e71e6c47acb0f8aaf83f34695e">tempest-TestNetworkBasicOps-655467240-project-member</nova:user>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <nova:project uuid="acc90003f0f7412b8daf8a1b6f0f1494">tempest-TestNetworkBasicOps-655467240</nova:project>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       </nova:owner>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <nova:root type="image" uuid="271ec98e-d058-421b-bbfb-4b4a5954c90a"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <nova:ports>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <nova:port uuid="5775c66b-2d08-4c9d-83fe-d4c692e19472">
Jan 23 10:27:59 compute-0 nova_compute[249229]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         </nova:port>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       </nova:ports>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     </nova:instance>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   </metadata>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   <sysinfo type="smbios">
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <system>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <entry name="manufacturer">RDO</entry>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <entry name="product">OpenStack Compute</entry>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <entry name="serial">2cb20ba8-5b68-4715-9848-bad345c47a31</entry>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <entry name="uuid">2cb20ba8-5b68-4715-9848-bad345c47a31</entry>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <entry name="family">Virtual Machine</entry>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     </system>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   </sysinfo>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   <os>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <boot dev="hd"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <smbios mode="sysinfo"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   </os>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   <features>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <acpi/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <apic/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <vmcoreinfo/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   </features>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   <clock offset="utc">
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <timer name="pit" tickpolicy="delay"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <timer name="hpet" present="no"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   </clock>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   <cpu mode="host-model" match="exact">
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <topology sockets="1" cores="1" threads="1"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   </cpu>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   <devices>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <disk type="network" device="disk">
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <driver type="raw" cache="none"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <source protocol="rbd" name="vms/2cb20ba8-5b68-4715-9848-bad345c47a31_disk">
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <host name="192.168.122.100" port="6789"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <host name="192.168.122.102" port="6789"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <host name="192.168.122.101" port="6789"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       </source>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <auth username="openstack">
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <secret type="ceph" uuid="f3005f84-239a-55b6-a948-8f1fb592b920"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       </auth>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <target dev="vda" bus="virtio"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <disk type="network" device="cdrom">
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <driver type="raw" cache="none"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <source protocol="rbd" name="vms/2cb20ba8-5b68-4715-9848-bad345c47a31_disk.config">
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <host name="192.168.122.100" port="6789"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <host name="192.168.122.102" port="6789"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <host name="192.168.122.101" port="6789"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       </source>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <auth username="openstack">
Jan 23 10:27:59 compute-0 nova_compute[249229]:         <secret type="ceph" uuid="f3005f84-239a-55b6-a948-8f1fb592b920"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       </auth>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <target dev="sda" bus="sata"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     </disk>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <interface type="ethernet">
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <mac address="fa:16:3e:07:d3:68"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <model type="virtio"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <driver name="vhost" rx_queue_size="512"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <mtu size="1442"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <target dev="tap5775c66b-2d"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     </interface>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <serial type="pty">
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <log file="/var/lib/nova/instances/2cb20ba8-5b68-4715-9848-bad345c47a31/console.log" append="off"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     </serial>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <video>
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <model type="virtio"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     </video>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <input type="tablet" bus="usb"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <rng model="virtio">
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <backend model="random">/dev/urandom</backend>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     </rng>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="pci" model="pcie-root-port"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <controller type="usb" index="0"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     <memballoon model="virtio">
Jan 23 10:27:59 compute-0 nova_compute[249229]:       <stats period="10"/>
Jan 23 10:27:59 compute-0 nova_compute[249229]:     </memballoon>
Jan 23 10:27:59 compute-0 nova_compute[249229]:   </devices>
Jan 23 10:27:59 compute-0 nova_compute[249229]: </domain>
Jan 23 10:27:59 compute-0 nova_compute[249229]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.787 249233 DEBUG nova.compute.manager [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Preparing to wait for external event network-vif-plugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.787 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.787 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.787 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.788 249233 DEBUG nova.virt.libvirt.vif [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:27:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-823186335',display_name='tempest-TestNetworkBasicOps-server-823186335',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-823186335',id=13,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALoQy92z4DUQMTV+DqFHSml8UJUaVcff/mSaypHwFMH7pDs09vJt3HuFEDGESi4DTro4DoZamY+RqX7NCM6Mkdp29d9ri0gEUF5j3pATC3bt0D18Sus1fVbyPJqdBijKQ==',key_name='tempest-TestNetworkBasicOps-1062461069',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-2q6ugufv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:27:52Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=2cb20ba8-5b68-4715-9848-bad345c47a31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.788 249233 DEBUG nova.network.os_vif_util [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.789 249233 DEBUG nova.network.os_vif_util [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:d3:68,bridge_name='br-int',has_traffic_filtering=True,id=5775c66b-2d08-4c9d-83fe-d4c692e19472,network=Network(64d8458c-fab0-469a-aa4f-0a8a3ecc755f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5775c66b-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.789 249233 DEBUG os_vif [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:d3:68,bridge_name='br-int',has_traffic_filtering=True,id=5775c66b-2d08-4c9d-83fe-d4c692e19472,network=Network(64d8458c-fab0-469a-aa4f-0a8a3ecc755f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5775c66b-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.789 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.790 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.790 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.793 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.794 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5775c66b-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.794 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5775c66b-2d, col_values=(('external_ids', {'iface-id': '5775c66b-2d08-4c9d-83fe-d4c692e19472', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:d3:68', 'vm-uuid': '2cb20ba8-5b68-4715-9848-bad345c47a31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.795 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:59 compute-0 NetworkManager[48866]: <info>  [1769164079.7966] manager: (tap5775c66b-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.798 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.802 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.803 249233 INFO os_vif [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:d3:68,bridge_name='br-int',has_traffic_filtering=True,id=5775c66b-2d08-4c9d-83fe-d4c692e19472,network=Network(64d8458c-fab0-469a-aa4f-0a8a3ecc755f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5775c66b-2d')
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.851 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.851 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.852 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] No VIF found with MAC fa:16:3e:07:d3:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.852 249233 INFO nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Using config drive
Jan 23 10:27:59 compute-0 nova_compute[249229]: 2026-01-23 10:27:59.872 249233 DEBUG nova.storage.rbd_utils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 2cb20ba8-5b68-4715-9848-bad345c47a31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:27:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:59] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:27:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:27:59] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:27:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 84 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Jan 23 10:28:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:28:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:00.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:28:00 compute-0 nova_compute[249229]: 2026-01-23 10:28:00.629 249233 INFO nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Creating config drive at /var/lib/nova/instances/2cb20ba8-5b68-4715-9848-bad345c47a31/disk.config
Jan 23 10:28:00 compute-0 nova_compute[249229]: 2026-01-23 10:28:00.635 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2cb20ba8-5b68-4715-9848-bad345c47a31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2arbebi_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:28:00 compute-0 nova_compute[249229]: 2026-01-23 10:28:00.758 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2cb20ba8-5b68-4715-9848-bad345c47a31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2arbebi_" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:28:00 compute-0 nova_compute[249229]: 2026-01-23 10:28:00.794 249233 DEBUG nova.storage.rbd_utils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] rbd image 2cb20ba8-5b68-4715-9848-bad345c47a31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 10:28:00 compute-0 nova_compute[249229]: 2026-01-23 10:28:00.799 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2cb20ba8-5b68-4715-9848-bad345c47a31/disk.config 2cb20ba8-5b68-4715-9848-bad345c47a31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:28:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:01.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:28:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:02.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:03 compute-0 nova_compute[249229]: 2026-01-23 10:28:03.232 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:03.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:03.709Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:28:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2299921761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:28:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3503106741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 10:28:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:04.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:04 compute-0 nova_compute[249229]: 2026-01-23 10:28:04.803 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:04 compute-0 nova_compute[249229]: 2026-01-23 10:28:04.962 249233 DEBUG oslo_concurrency.processutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2cb20ba8-5b68-4715-9848-bad345c47a31/disk.config 2cb20ba8-5b68-4715-9848-bad345c47a31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:28:04 compute-0 nova_compute[249229]: 2026-01-23 10:28:04.963 249233 INFO nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Deleting local config drive /var/lib/nova/instances/2cb20ba8-5b68-4715-9848-bad345c47a31/disk.config because it was imported into RBD.
Jan 23 10:28:04 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 23 10:28:05 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 23 10:28:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:28:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:28:05 compute-0 kernel: tap5775c66b-2d: entered promiscuous mode
Jan 23 10:28:05 compute-0 nova_compute[249229]: 2026-01-23 10:28:05.092 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:05 compute-0 NetworkManager[48866]: <info>  [1769164085.0956] manager: (tap5775c66b-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Jan 23 10:28:05 compute-0 ovn_controller[151634]: 2026-01-23T10:28:05Z|00061|binding|INFO|Claiming lport 5775c66b-2d08-4c9d-83fe-d4c692e19472 for this chassis.
Jan 23 10:28:05 compute-0 ovn_controller[151634]: 2026-01-23T10:28:05Z|00062|binding|INFO|5775c66b-2d08-4c9d-83fe-d4c692e19472: Claiming fa:16:3e:07:d3:68 10.100.0.3
Jan 23 10:28:05 compute-0 nova_compute[249229]: 2026-01-23 10:28:05.098 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:05 compute-0 nova_compute[249229]: 2026-01-23 10:28:05.103 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.111 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:d3:68 10.100.0.3'], port_security=['fa:16:3e:07:d3:68 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2cb20ba8-5b68-4715-9848-bad345c47a31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64d8458c-fab0-469a-aa4f-0a8a3ecc755f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77c410d2-c19a-410d-827c-0cf5352f9f39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51885346-a2e1-48ee-accb-48f791330df1, chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], logical_port=5775c66b-2d08-4c9d-83fe-d4c692e19472) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.113 161921 INFO neutron.agent.ovn.metadata.agent [-] Port 5775c66b-2d08-4c9d-83fe-d4c692e19472 in datapath 64d8458c-fab0-469a-aa4f-0a8a3ecc755f bound to our chassis
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.113 161921 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64d8458c-fab0-469a-aa4f-0a8a3ecc755f
Jan 23 10:28:05 compute-0 systemd-machined[216411]: New machine qemu-4-instance-0000000d.
Jan 23 10:28:05 compute-0 systemd-udevd[270601]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.131 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[b51ea679-3270-44d5-91c2-f675731a9ddc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.132 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64d8458c-f1 in ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.134 255218 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64d8458c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.135 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[be80870b-0a46-4e1c-afd9-8aa04a5cf172]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.135 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[04a3c98e-5fd0-4a39-a02b-638658148518]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 NetworkManager[48866]: <info>  [1769164085.1511] device (tap5775c66b-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 10:28:05 compute-0 NetworkManager[48866]: <info>  [1769164085.1521] device (tap5775c66b-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.151 162436 DEBUG oslo.privsep.daemon [-] privsep: reply[a045df9b-8548-4479-a3a2-2afa3bdc4ac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-0000000d.
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.176 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[b52d7e06-2f12-4a09-b8ee-282e74568c92]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 ovn_controller[151634]: 2026-01-23T10:28:05Z|00063|binding|INFO|Setting lport 5775c66b-2d08-4c9d-83fe-d4c692e19472 ovn-installed in OVS
Jan 23 10:28:05 compute-0 ovn_controller[151634]: 2026-01-23T10:28:05Z|00064|binding|INFO|Setting lport 5775c66b-2d08-4c9d-83fe-d4c692e19472 up in Southbound
Jan 23 10:28:05 compute-0 nova_compute[249229]: 2026-01-23 10:28:05.187 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.208 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[1548e96f-e0da-4cd8-8c0d-2e8252ea3cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 systemd-udevd[270604]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.214 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[eeddd082-4d6c-4ed3-bbae-472ea238bddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 NetworkManager[48866]: <info>  [1769164085.2148] manager: (tap64d8458c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.248 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[33c896f5-9c82-4244-aa98-2635e1034a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.252 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[ea868080-8b5c-4a63-88b3-4f1bf0fef8ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 NetworkManager[48866]: <info>  [1769164085.2712] device (tap64d8458c-f0): carrier: link connected
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.276 255276 DEBUG oslo.privsep.daemon [-] privsep: reply[de6a6a46-ea41-407f-8205-ea0c392e71af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.305 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[81201a13-355e-4e7d-9cef-7587a6ad8c63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64d8458c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:e0:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520680, 'reachable_time': 21843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270633, 'error': None, 'target': 'ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.322 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[6caf319c-111e-49f9-bfd5-72fcaf9f2b47]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:e086'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520680, 'tstamp': 520680}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270634, 'error': None, 'target': 'ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.342 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[a589a50d-544a-41c7-892d-4730580ce6fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64d8458c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:e0:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520680, 'reachable_time': 21843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270635, 'error': None, 'target': 'ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.377 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[b6127032-05a0-459d-9d09-c6ab402541f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:05.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.438 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0dca0a-f5e6-4b0e-b9f6-392df3e9ae5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.439 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64d8458c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.440 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.440 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64d8458c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:28:05 compute-0 nova_compute[249229]: 2026-01-23 10:28:05.442 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:05 compute-0 NetworkManager[48866]: <info>  [1769164085.4425] manager: (tap64d8458c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 23 10:28:05 compute-0 kernel: tap64d8458c-f0: entered promiscuous mode
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.444 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64d8458c-f0, col_values=(('external_ids', {'iface-id': '5cbba0a0-5f58-4d90-8d1c-814aceb1262d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:28:05 compute-0 ovn_controller[151634]: 2026-01-23T10:28:05Z|00065|binding|INFO|Releasing lport 5cbba0a0-5f58-4d90-8d1c-814aceb1262d from this chassis (sb_readonly=0)
Jan 23 10:28:05 compute-0 nova_compute[249229]: 2026-01-23 10:28:05.463 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.465 161921 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64d8458c-fab0-469a-aa4f-0a8a3ecc755f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64d8458c-fab0-469a-aa4f-0a8a3ecc755f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.465 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[f61c9578-4664-46dd-bca2-765a11e86a8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.466 161921 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: global
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     log         /dev/log local0 debug
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     log-tag     haproxy-metadata-proxy-64d8458c-fab0-469a-aa4f-0a8a3ecc755f
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     user        root
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     group       root
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     maxconn     1024
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     pidfile     /var/lib/neutron/external/pids/64d8458c-fab0-469a-aa4f-0a8a3ecc755f.pid.haproxy
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     daemon
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: defaults
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     log global
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     mode http
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     option httplog
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     option dontlognull
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     option http-server-close
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     option forwardfor
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     retries                 3
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     timeout http-request    30s
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     timeout connect         30s
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     timeout client          32s
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     timeout server          32s
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     timeout http-keep-alive 30s
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: listen listener
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     bind 169.254.169.254:80
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     server metadata /var/lib/neutron/metadata_proxy
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:     http-request add-header X-OVN-Network-ID 64d8458c-fab0-469a-aa4f-0a8a3ecc755f
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 23 10:28:05 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:05.467 161921 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f', 'env', 'PROCESS_TAG=haproxy-64d8458c-fab0-469a-aa4f-0a8a3ecc755f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64d8458c-fab0-469a-aa4f-0a8a3ecc755f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 23 10:28:05 compute-0 podman[270667]: 2026-01-23 10:28:05.870024036 +0000 UTC m=+0.050471908 container create eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 10:28:05 compute-0 systemd[1]: Started libpod-conmon-eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e.scope.
Jan 23 10:28:05 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474a621dad5e73f890f15095c0cd73edf2db8f99eacd8cb0d36787bca84e72f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:05 compute-0 podman[270667]: 2026-01-23 10:28:05.845458827 +0000 UTC m=+0.025906729 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 23 10:28:05 compute-0 podman[270667]: 2026-01-23 10:28:05.943275242 +0000 UTC m=+0.123723134 container init eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:28:05 compute-0 podman[270667]: 2026-01-23 10:28:05.950866991 +0000 UTC m=+0.131314863 container start eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 10:28:05 compute-0 neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f[270683]: [NOTICE]   (270687) : New worker (270689) forked
Jan 23 10:28:05 compute-0 neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f[270683]: [NOTICE]   (270687) : Loading success.
Jan 23 10:28:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 23 10:28:06 compute-0 ceph-mon[74335]: pgmap v1045: 353 pgs: 353 active+clean; 84 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Jan 23 10:28:06 compute-0 ceph-mon[74335]: pgmap v1046: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:28:06 compute-0 ceph-mon[74335]: pgmap v1047: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 10:28:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:28:06 compute-0 nova_compute[249229]: 2026-01-23 10:28:06.137 249233 DEBUG nova.compute.manager [req-11053d2f-79ef-47e3-9aa2-9258b2f3ce87 req-68f722d0-444e-49a2-9a42-e41174b9a140 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Received event network-vif-plugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:28:06 compute-0 nova_compute[249229]: 2026-01-23 10:28:06.137 249233 DEBUG oslo_concurrency.lockutils [req-11053d2f-79ef-47e3-9aa2-9258b2f3ce87 req-68f722d0-444e-49a2-9a42-e41174b9a140 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:28:06 compute-0 nova_compute[249229]: 2026-01-23 10:28:06.138 249233 DEBUG oslo_concurrency.lockutils [req-11053d2f-79ef-47e3-9aa2-9258b2f3ce87 req-68f722d0-444e-49a2-9a42-e41174b9a140 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:28:06 compute-0 nova_compute[249229]: 2026-01-23 10:28:06.138 249233 DEBUG oslo_concurrency.lockutils [req-11053d2f-79ef-47e3-9aa2-9258b2f3ce87 req-68f722d0-444e-49a2-9a42-e41174b9a140 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:28:06 compute-0 nova_compute[249229]: 2026-01-23 10:28:06.138 249233 DEBUG nova.compute.manager [req-11053d2f-79ef-47e3-9aa2-9258b2f3ce87 req-68f722d0-444e-49a2-9a42-e41174b9a140 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Processing event network-vif-plugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 10:28:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:06.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:06 compute-0 sudo[270735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:28:06 compute-0 sudo[270735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:06 compute-0 sudo[270735]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:07.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.574 249233 DEBUG nova.compute.manager [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.576 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769164087.5739455, 2cb20ba8-5b68-4715-9848-bad345c47a31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.577 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] VM Started (Lifecycle Event)
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.579 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.582 249233 INFO nova.virt.libvirt.driver [-] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Instance spawned successfully.
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.583 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.612 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.616 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.617 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.617 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.618 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.618 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.619 249233 DEBUG nova.virt.libvirt.driver [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.623 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.659 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.660 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769164087.575392, 2cb20ba8-5b68-4715-9848-bad345c47a31 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.660 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] VM Paused (Lifecycle Event)
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.689 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.691 249233 DEBUG nova.virt.driver [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] Emitting event <LifecycleEvent: 1769164087.5786817, 2cb20ba8-5b68-4715-9848-bad345c47a31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.692 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] VM Resumed (Lifecycle Event)
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.714 249233 INFO nova.compute.manager [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Took 15.41 seconds to spawn the instance on the hypervisor.
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.715 249233 DEBUG nova.compute.manager [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.717 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.723 249233 DEBUG nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.757 249233 INFO nova.compute.manager [None req-c430b46b-8f25-419d-b658-4f3263326c9f - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 10:28:07 compute-0 nova_compute[249229]: 2026-01-23 10:28:07.797 249233 INFO nova.compute.manager [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Took 16.46 seconds to build instance.
Jan 23 10:28:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:07.818Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1005 KiB/s wr, 31 op/s
Jan 23 10:28:08 compute-0 nova_compute[249229]: 2026-01-23 10:28:08.071 249233 DEBUG oslo_concurrency.lockutils [None req-a8316d1d-01d7-4e82-8b87-ac8ebe0287a9 f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:28:08 compute-0 ceph-mon[74335]: pgmap v1048: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 23 10:28:08 compute-0 nova_compute[249229]: 2026-01-23 10:28:08.217 249233 DEBUG nova.compute.manager [req-0312f3fc-8d6e-471b-905f-a487c626bc9a req-d188b88b-6e56-454b-a8dd-8df589cea024 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Received event network-vif-plugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:28:08 compute-0 nova_compute[249229]: 2026-01-23 10:28:08.218 249233 DEBUG oslo_concurrency.lockutils [req-0312f3fc-8d6e-471b-905f-a487c626bc9a req-d188b88b-6e56-454b-a8dd-8df589cea024 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:28:08 compute-0 nova_compute[249229]: 2026-01-23 10:28:08.218 249233 DEBUG oslo_concurrency.lockutils [req-0312f3fc-8d6e-471b-905f-a487c626bc9a req-d188b88b-6e56-454b-a8dd-8df589cea024 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:28:08 compute-0 nova_compute[249229]: 2026-01-23 10:28:08.218 249233 DEBUG oslo_concurrency.lockutils [req-0312f3fc-8d6e-471b-905f-a487c626bc9a req-d188b88b-6e56-454b-a8dd-8df589cea024 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:28:08 compute-0 nova_compute[249229]: 2026-01-23 10:28:08.219 249233 DEBUG nova.compute.manager [req-0312f3fc-8d6e-471b-905f-a487c626bc9a req-d188b88b-6e56-454b-a8dd-8df589cea024 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] No waiting events found dispatching network-vif-plugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:28:08 compute-0 nova_compute[249229]: 2026-01-23 10:28:08.219 249233 WARNING nova.compute.manager [req-0312f3fc-8d6e-471b-905f-a487c626bc9a req-d188b88b-6e56-454b-a8dd-8df589cea024 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Received unexpected event network-vif-plugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 for instance with vm_state active and task_state None.
Jan 23 10:28:08 compute-0 nova_compute[249229]: 2026-01-23 10:28:08.234 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:08 compute-0 podman[270767]: 2026-01-23 10:28:08.569719258 +0000 UTC m=+0.085042297 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 23 10:28:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:08.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:09 compute-0 ceph-mon[74335]: pgmap v1049: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1005 KiB/s wr, 31 op/s
Jan 23 10:28:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:09.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:09 compute-0 nova_compute[249229]: 2026-01-23 10:28:09.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:28:09 compute-0 nova_compute[249229]: 2026-01-23 10:28:09.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:28:09 compute-0 nova_compute[249229]: 2026-01-23 10:28:09.806 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:09] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Jan 23 10:28:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:09] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Jan 23 10:28:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 55 KiB/s wr, 16 op/s
Jan 23 10:28:10 compute-0 ceph-mon[74335]: pgmap v1050: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 55 KiB/s wr, 16 op/s
Jan 23 10:28:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:10.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:10 compute-0 nova_compute[249229]: 2026-01-23 10:28:10.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:28:10 compute-0 nova_compute[249229]: 2026-01-23 10:28:10.718 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:28:10 compute-0 nova_compute[249229]: 2026-01-23 10:28:10.718 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:28:11 compute-0 nova_compute[249229]: 2026-01-23 10:28:11.289 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:28:11 compute-0 nova_compute[249229]: 2026-01-23 10:28:11.290 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquired lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:28:11 compute-0 nova_compute[249229]: 2026-01-23 10:28:11.291 249233 DEBUG nova.network.neutron [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 10:28:11 compute-0 nova_compute[249229]: 2026-01-23 10:28:11.291 249233 DEBUG nova.objects.instance [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2cb20ba8-5b68-4715-9848-bad345c47a31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:28:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:11.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 55 KiB/s wr, 85 op/s
Jan 23 10:28:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:12.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:13 compute-0 NetworkManager[48866]: <info>  [1769164093.1207] manager: (patch-provnet-995e8c2d-ca55-405c-bf26-97e408875e42-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Jan 23 10:28:13 compute-0 NetworkManager[48866]: <info>  [1769164093.1216] manager: (patch-br-int-to-provnet-995e8c2d-ca55-405c-bf26-97e408875e42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.122 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:13 compute-0 ovn_controller[151634]: 2026-01-23T10:28:13Z|00066|binding|INFO|Releasing lport 5cbba0a0-5f58-4d90-8d1c-814aceb1262d from this chassis (sb_readonly=0)
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.157 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:13 compute-0 ovn_controller[151634]: 2026-01-23T10:28:13Z|00067|binding|INFO|Releasing lport 5cbba0a0-5f58-4d90-8d1c-814aceb1262d from this chassis (sb_readonly=0)
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.162 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.236 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:13.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.514 249233 DEBUG nova.compute.manager [req-4fcddbef-3f23-4730-b977-56bf57d6daf5 req-64341441-fff6-41f6-a04d-7ec2f8d8cafe 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Received event network-changed-5775c66b-2d08-4c9d-83fe-d4c692e19472 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.515 249233 DEBUG nova.compute.manager [req-4fcddbef-3f23-4730-b977-56bf57d6daf5 req-64341441-fff6-41f6-a04d-7ec2f8d8cafe 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Refreshing instance network info cache due to event network-changed-5775c66b-2d08-4c9d-83fe-d4c692e19472. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.515 249233 DEBUG oslo_concurrency.lockutils [req-4fcddbef-3f23-4730-b977-56bf57d6daf5 req-64341441-fff6-41f6-a04d-7ec2f8d8cafe 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:28:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:13.711Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.774 249233 DEBUG nova.network.neutron [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Updating instance_info_cache with network_info: [{"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.800 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Releasing lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.800 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.800 249233 DEBUG oslo_concurrency.lockutils [req-4fcddbef-3f23-4730-b977-56bf57d6daf5 req-64341441-fff6-41f6-a04d-7ec2f8d8cafe 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquired lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.801 249233 DEBUG nova.network.neutron [req-4fcddbef-3f23-4730-b977-56bf57d6daf5 req-64341441-fff6-41f6-a04d-7ec2f8d8cafe 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Refreshing network info cache for port 5775c66b-2d08-4c9d-83fe-d4c692e19472 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.801 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.802 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.826 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.827 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.827 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.827 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:28:13 compute-0 nova_compute[249229]: 2026-01-23 10:28:13.828 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:28:13 compute-0 ceph-mon[74335]: pgmap v1051: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 55 KiB/s wr, 85 op/s
Jan 23 10:28:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/676391413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:28:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/615719513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:28:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:28:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:28:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494566907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.279 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.366 249233 DEBUG nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.367 249233 DEBUG nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 10:28:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:14.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.530 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.531 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4370MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.532 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.532 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.611 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Instance 2cb20ba8-5b68-4715-9848-bad345c47a31 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.612 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.612 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.667 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:28:14 compute-0 nova_compute[249229]: 2026-01-23 10:28:14.808 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:28:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/563239400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:28:15 compute-0 nova_compute[249229]: 2026-01-23 10:28:15.123 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:28:15 compute-0 nova_compute[249229]: 2026-01-23 10:28:15.129 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:28:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:15.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:15 compute-0 nova_compute[249229]: 2026-01-23 10:28:15.716 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:28:15 compute-0 nova_compute[249229]: 2026-01-23 10:28:15.740 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:28:15 compute-0 nova_compute[249229]: 2026-01-23 10:28:15.741 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:28:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:28:16 compute-0 nova_compute[249229]: 2026-01-23 10:28:16.277 249233 DEBUG nova.network.neutron [req-4fcddbef-3f23-4730-b977-56bf57d6daf5 req-64341441-fff6-41f6-a04d-7ec2f8d8cafe 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Updated VIF entry in instance network info cache for port 5775c66b-2d08-4c9d-83fe-d4c692e19472. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 10:28:16 compute-0 nova_compute[249229]: 2026-01-23 10:28:16.278 249233 DEBUG nova.network.neutron [req-4fcddbef-3f23-4730-b977-56bf57d6daf5 req-64341441-fff6-41f6-a04d-7ec2f8d8cafe 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Updating instance_info_cache with network_info: [{"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:28:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:16.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:16 compute-0 nova_compute[249229]: 2026-01-23 10:28:16.651 249233 DEBUG oslo_concurrency.lockutils [req-4fcddbef-3f23-4730-b977-56bf57d6daf5 req-64341441-fff6-41f6-a04d-7ec2f8d8cafe 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Releasing lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 10:28:16 compute-0 nova_compute[249229]: 2026-01-23 10:28:16.656 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:28:16 compute-0 nova_compute[249229]: 2026-01-23 10:28:16.708 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:28:16 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3161038119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:28:16 compute-0 ceph-mon[74335]: pgmap v1052: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:28:16 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/494566907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:28:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:17.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:17 compute-0 nova_compute[249229]: 2026-01-23 10:28:17.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:28:17 compute-0 nova_compute[249229]: 2026-01-23 10:28:17.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:28:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:17.820Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 73 op/s
Jan 23 10:28:18 compute-0 nova_compute[249229]: 2026-01-23 10:28:18.237 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:18.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/563239400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:28:18 compute-0 ceph-mon[74335]: pgmap v1053: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 10:28:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:18.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:19.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:19 compute-0 nova_compute[249229]: 2026-01-23 10:28:19.811 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:19] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Jan 23 10:28:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:19] "GET /metrics HTTP/1.1" 200 48561 "" "Prometheus/2.51.0"
Jan 23 10:28:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:28:20
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.data', 'vms', '.nfs', 'volumes', 'default.rgw.log', 'default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:28:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:28:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:28:20 compute-0 ceph-mon[74335]: pgmap v1054: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 73 op/s
Jan 23 10:28:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3824772010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:28:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:20.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
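
[editor's note] The pg_autoscaler lines above are internally consistent: every raw "pg target" equals capacity_ratio x bias x 300, where 300 is presumably num_osds x mon_target_pg_per_osd for this 3-OSD cluster (the default mon_target_pg_per_osd is 100); the "quantized" figure is then apparently rounded to a power of two and floored at each pool's minimum PG count (hence 1 for '.mgr', 16 for the CephFS metadata pool, 32 for the rest). A minimal sketch, using ratios copied from the log and the inferred factor of 300, that reproduces the raw targets:

    # Sketch (assumption): reproduce the raw pg targets logged by pg_autoscaler.
    # capacity_ratio and bias are taken verbatim from the log lines above;
    # the factor 300 (num_osds * mon_target_pg_per_osd) is an inference.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.00034841348814872695, 1.0),
        "images":             (0.000665858301588852, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (capacity_ratio, bias) in pools.items():
        raw_target = capacity_ratio * bias * 300
        print(f"{name}: pg target {raw_target}")
    # e.g. 'vms' prints 0.10452404644461809, matching the logged value.
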
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:28:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:28:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:21.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:21 compute-0 ceph-mon[74335]: pgmap v1055: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Jan 23 10:28:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:28:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 92 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 827 KiB/s wr, 83 op/s
Jan 23 10:28:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:22.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:23 compute-0 nova_compute[249229]: 2026-01-23 10:28:23.239 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:23.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:23 compute-0 ceph-mon[74335]: pgmap v1056: 353 pgs: 353 active+clean; 92 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 827 KiB/s wr, 83 op/s
Jan 23 10:28:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:23.711Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 92 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 827 KiB/s wr, 14 op/s
Jan 23 10:28:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:24.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:24 compute-0 nova_compute[249229]: 2026-01-23 10:28:24.812 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:25.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:25 compute-0 ceph-mon[74335]: pgmap v1057: 353 pgs: 353 active+clean; 92 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 827 KiB/s wr, 14 op/s
Jan 23 10:28:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 95 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1000 KiB/s wr, 15 op/s
Jan 23 10:28:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:26.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:26 compute-0 sudo[270853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:28:26 compute-0 sudo[270853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:26 compute-0 sudo[270853]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:27 compute-0 ceph-mon[74335]: pgmap v1058: 353 pgs: 353 active+clean; 95 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1000 KiB/s wr, 15 op/s
Jan 23 10:28:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:27.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:27.821Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 103 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.6 MiB/s wr, 21 op/s
Jan 23 10:28:28 compute-0 nova_compute[249229]: 2026-01-23 10:28:28.242 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:28.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:28 compute-0 podman[270879]: 2026-01-23 10:28:28.553359647 +0000 UTC m=+0.080607359 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 10:28:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:28.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:28 compute-0 ceph-mon[74335]: pgmap v1059: 353 pgs: 353 active+clean; 103 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.6 MiB/s wr, 21 op/s
Jan 23 10:28:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:29.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:29 compute-0 nova_compute[249229]: 2026-01-23 10:28:29.816 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:29] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Jan 23 10:28:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:29] "GET /metrics HTTP/1.1" 200 48556 "" "Prometheus/2.51.0"
Jan 23 10:28:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 103 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.6 MiB/s wr, 21 op/s
Jan 23 10:28:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:30.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:31 compute-0 ovn_controller[151634]: 2026-01-23T10:28:31Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:d3:68 10.100.0.3
Jan 23 10:28:31 compute-0 ovn_controller[151634]: 2026-01-23T10:28:31Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:d3:68 10.100.0.3
Jan 23 10:28:31 compute-0 ceph-mon[74335]: pgmap v1060: 353 pgs: 353 active+clean; 103 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.6 MiB/s wr, 21 op/s
Jan 23 10:28:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:31.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:31 compute-0 sudo[270910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:28:31 compute-0 sudo[270910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:31 compute-0 sudo[270910]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:31 compute-0 sudo[270935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:28:31 compute-0 sudo[270935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 113 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 186 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Jan 23 10:28:32 compute-0 sudo[270935]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:32.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:28:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:28:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:28:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:28:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 113 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 1.5 MiB/s wr, 37 op/s
Jan 23 10:28:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:28:32 compute-0 ceph-mon[74335]: pgmap v1061: 353 pgs: 353 active+clean; 113 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 186 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Jan 23 10:28:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:28:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:28:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:28:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:28:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:28:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:28:32 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:28:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:28:32 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:28:32 compute-0 sudo[270993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:28:32 compute-0 sudo[270993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:32 compute-0 sudo[270993]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:33 compute-0 sudo[271018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:28:33 compute-0 sudo[271018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:33 compute-0 nova_compute[249229]: 2026-01-23 10:28:33.244 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:33 compute-0 podman[271086]: 2026-01-23 10:28:33.439886938 +0000 UTC m=+0.038829109 container create 50d52493a575190040be87231337cd7df75d9225c2121a5c468a4b1c9cfabf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_murdock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:28:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:33.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:33 compute-0 systemd[1]: Started libpod-conmon-50d52493a575190040be87231337cd7df75d9225c2121a5c468a4b1c9cfabf20.scope.
Jan 23 10:28:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:28:33 compute-0 podman[271086]: 2026-01-23 10:28:33.423072288 +0000 UTC m=+0.022014479 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:28:33 compute-0 podman[271086]: 2026-01-23 10:28:33.521766335 +0000 UTC m=+0.120708526 container init 50d52493a575190040be87231337cd7df75d9225c2121a5c468a4b1c9cfabf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 10:28:33 compute-0 podman[271086]: 2026-01-23 10:28:33.528277741 +0000 UTC m=+0.127219912 container start 50d52493a575190040be87231337cd7df75d9225c2121a5c468a4b1c9cfabf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:28:33 compute-0 podman[271086]: 2026-01-23 10:28:33.531681658 +0000 UTC m=+0.130623839 container attach 50d52493a575190040be87231337cd7df75d9225c2121a5c468a4b1c9cfabf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_murdock, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:28:33 compute-0 unruffled_murdock[271102]: 167 167
Jan 23 10:28:33 compute-0 systemd[1]: libpod-50d52493a575190040be87231337cd7df75d9225c2121a5c468a4b1c9cfabf20.scope: Deactivated successfully.
Jan 23 10:28:33 compute-0 conmon[271102]: conmon 50d52493a575190040be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50d52493a575190040be87231337cd7df75d9225c2121a5c468a4b1c9cfabf20.scope/container/memory.events
Jan 23 10:28:33 compute-0 podman[271086]: 2026-01-23 10:28:33.53562341 +0000 UTC m=+0.134565581 container died 50d52493a575190040be87231337cd7df75d9225c2121a5c468a4b1c9cfabf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_murdock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9425ce46ea32df78e653a16c991c62c00a72633a232c04b5917bf58b855c0a72-merged.mount: Deactivated successfully.
Jan 23 10:28:33 compute-0 podman[271086]: 2026-01-23 10:28:33.571241417 +0000 UTC m=+0.170183588 container remove 50d52493a575190040be87231337cd7df75d9225c2121a5c468a4b1c9cfabf20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_murdock, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 10:28:33 compute-0 systemd[1]: libpod-conmon-50d52493a575190040be87231337cd7df75d9225c2121a5c468a4b1c9cfabf20.scope: Deactivated successfully.
Jan 23 10:28:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:33.712Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:28:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:33.713Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:33 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:28:33 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:28:33 compute-0 ceph-mon[74335]: pgmap v1062: 353 pgs: 353 active+clean; 113 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 1.5 MiB/s wr, 37 op/s
Jan 23 10:28:33 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:28:33 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:28:33 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:28:33 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:28:33 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:28:33 compute-0 podman[271124]: 2026-01-23 10:28:33.743097271 +0000 UTC m=+0.048333840 container create 72782722c22c8087ad694e28e01b921827d47086a05af21bcf3531133e6731ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_goldberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:28:33 compute-0 systemd[1]: Started libpod-conmon-72782722c22c8087ad694e28e01b921827d47086a05af21bcf3531133e6731ed.scope.
Jan 23 10:28:33 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60ab2f6e4c2f0277f9b239039c91008f44de312c9fa718188e2265b4fb6a756/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60ab2f6e4c2f0277f9b239039c91008f44de312c9fa718188e2265b4fb6a756/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60ab2f6e4c2f0277f9b239039c91008f44de312c9fa718188e2265b4fb6a756/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60ab2f6e4c2f0277f9b239039c91008f44de312c9fa718188e2265b4fb6a756/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60ab2f6e4c2f0277f9b239039c91008f44de312c9fa718188e2265b4fb6a756/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:33 compute-0 podman[271124]: 2026-01-23 10:28:33.81452006 +0000 UTC m=+0.119756639 container init 72782722c22c8087ad694e28e01b921827d47086a05af21bcf3531133e6731ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_goldberg, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 10:28:33 compute-0 podman[271124]: 2026-01-23 10:28:33.723960395 +0000 UTC m=+0.029197014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:28:33 compute-0 podman[271124]: 2026-01-23 10:28:33.822204749 +0000 UTC m=+0.127441318 container start 72782722c22c8087ad694e28e01b921827d47086a05af21bcf3531133e6731ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 10:28:33 compute-0 podman[271124]: 2026-01-23 10:28:33.824778313 +0000 UTC m=+0.130014882 container attach 72782722c22c8087ad694e28e01b921827d47086a05af21bcf3531133e6731ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 23 10:28:34 compute-0 interesting_goldberg[271140]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:28:34 compute-0 interesting_goldberg[271140]: --> All data devices are unavailable
Jan 23 10:28:34 compute-0 systemd[1]: libpod-72782722c22c8087ad694e28e01b921827d47086a05af21bcf3531133e6731ed.scope: Deactivated successfully.
Jan 23 10:28:34 compute-0 podman[271124]: 2026-01-23 10:28:34.181841613 +0000 UTC m=+0.487078182 container died 72782722c22c8087ad694e28e01b921827d47086a05af21bcf3531133e6731ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_goldberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 10:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f60ab2f6e4c2f0277f9b239039c91008f44de312c9fa718188e2265b4fb6a756-merged.mount: Deactivated successfully.
Jan 23 10:28:34 compute-0 podman[271124]: 2026-01-23 10:28:34.230587714 +0000 UTC m=+0.535824283 container remove 72782722c22c8087ad694e28e01b921827d47086a05af21bcf3531133e6731ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_goldberg, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:28:34 compute-0 systemd[1]: libpod-conmon-72782722c22c8087ad694e28e01b921827d47086a05af21bcf3531133e6731ed.scope: Deactivated successfully.
Jan 23 10:28:34 compute-0 sudo[271018]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:34 compute-0 sudo[271169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:28:34 compute-0 sudo[271169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:34 compute-0 sudo[271169]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:34 compute-0 sudo[271194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:28:34 compute-0 sudo[271194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:34.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 113 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 1.5 MiB/s wr, 37 op/s
Jan 23 10:28:34 compute-0 nova_compute[249229]: 2026-01-23 10:28:34.818 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:34 compute-0 podman[271262]: 2026-01-23 10:28:34.855560911 +0000 UTC m=+0.039818908 container create a2fc102ce40b4e9a4ab830d4ca5bd1052939c9b015343b2567d2340fa62b288a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:28:34 compute-0 systemd[1]: Started libpod-conmon-a2fc102ce40b4e9a4ab830d4ca5bd1052939c9b015343b2567d2340fa62b288a.scope.
Jan 23 10:28:34 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:28:34 compute-0 podman[271262]: 2026-01-23 10:28:34.838797242 +0000 UTC m=+0.023055259 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:28:34 compute-0 podman[271262]: 2026-01-23 10:28:34.938737355 +0000 UTC m=+0.122995392 container init a2fc102ce40b4e9a4ab830d4ca5bd1052939c9b015343b2567d2340fa62b288a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hoover, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:28:34 compute-0 podman[271262]: 2026-01-23 10:28:34.946024883 +0000 UTC m=+0.130282880 container start a2fc102ce40b4e9a4ab830d4ca5bd1052939c9b015343b2567d2340fa62b288a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hoover, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:28:34 compute-0 podman[271262]: 2026-01-23 10:28:34.949333567 +0000 UTC m=+0.133591614 container attach a2fc102ce40b4e9a4ab830d4ca5bd1052939c9b015343b2567d2340fa62b288a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:28:34 compute-0 distracted_hoover[271278]: 167 167
Jan 23 10:28:34 compute-0 systemd[1]: libpod-a2fc102ce40b4e9a4ab830d4ca5bd1052939c9b015343b2567d2340fa62b288a.scope: Deactivated successfully.
Jan 23 10:28:34 compute-0 podman[271262]: 2026-01-23 10:28:34.951425187 +0000 UTC m=+0.135683204 container died a2fc102ce40b4e9a4ab830d4ca5bd1052939c9b015343b2567d2340fa62b288a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hoover, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 23 10:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dd7d5404d2515d5463148bc9844be4d2ad8c92d7394549021ba782dd7c53dca-merged.mount: Deactivated successfully.
Jan 23 10:28:34 compute-0 podman[271262]: 2026-01-23 10:28:34.991273655 +0000 UTC m=+0.175531652 container remove a2fc102ce40b4e9a4ab830d4ca5bd1052939c9b015343b2567d2340fa62b288a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hoover, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:28:34 compute-0 systemd[1]: libpod-conmon-a2fc102ce40b4e9a4ab830d4ca5bd1052939c9b015343b2567d2340fa62b288a.scope: Deactivated successfully.
Jan 23 10:28:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:28:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:28:35 compute-0 podman[271303]: 2026-01-23 10:28:35.215903295 +0000 UTC m=+0.062949278 container create 6cf41d6e26ab32ef1d7657ee0093beb06f706211fd7eba1b2738abb8bc2f8919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_herschel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 10:28:35 compute-0 systemd[1]: Started libpod-conmon-6cf41d6e26ab32ef1d7657ee0093beb06f706211fd7eba1b2738abb8bc2f8919.scope.
Jan 23 10:28:35 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34eea82c6820cf81a2a464791b812e2e3b2a3dadc8f4537906e1134adf4e404/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34eea82c6820cf81a2a464791b812e2e3b2a3dadc8f4537906e1134adf4e404/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34eea82c6820cf81a2a464791b812e2e3b2a3dadc8f4537906e1134adf4e404/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d34eea82c6820cf81a2a464791b812e2e3b2a3dadc8f4537906e1134adf4e404/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:35 compute-0 podman[271303]: 2026-01-23 10:28:35.283729771 +0000 UTC m=+0.130775774 container init 6cf41d6e26ab32ef1d7657ee0093beb06f706211fd7eba1b2738abb8bc2f8919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Jan 23 10:28:35 compute-0 podman[271303]: 2026-01-23 10:28:35.193761233 +0000 UTC m=+0.040807306 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:28:35 compute-0 podman[271303]: 2026-01-23 10:28:35.295621561 +0000 UTC m=+0.142667534 container start 6cf41d6e26ab32ef1d7657ee0093beb06f706211fd7eba1b2738abb8bc2f8919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_herschel, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 10:28:35 compute-0 podman[271303]: 2026-01-23 10:28:35.299064149 +0000 UTC m=+0.146110152 container attach 6cf41d6e26ab32ef1d7657ee0093beb06f706211fd7eba1b2738abb8bc2f8919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_herschel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 10:28:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:28:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:35.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:28:35 compute-0 jovial_herschel[271320]: {
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:     "1": [
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:         {
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             "devices": [
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "/dev/loop3"
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             ],
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             "lv_name": "ceph_lv0",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             "lv_size": "21470642176",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             "name": "ceph_lv0",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             "tags": {
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.cluster_name": "ceph",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.crush_device_class": "",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.encrypted": "0",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.osd_id": "1",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.type": "block",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.vdo": "0",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:                 "ceph.with_tpm": "0"
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             },
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             "type": "block",
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:             "vg_name": "ceph_vg0"
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:         }
Jan 23 10:28:35 compute-0 jovial_herschel[271320]:     ]
Jan 23 10:28:35 compute-0 jovial_herschel[271320]: }
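
[editor's note] The JSON dump above (from the cephadm-driven "ceph-volume ... lvm list --format json" run) shows that /dev/ceph_vg0/ceph_lv0 already carries OSD lv_tags (osd_id=1, osd_fsid=e272688e-...), which is likely why the earlier "lvm batch" attempt at 10:28:34 reported "All data devices are unavailable" instead of preparing it again. A minimal sketch, assuming the JSON output has been saved to a hypothetical file lvm_list.json, that lists which LVs are already claimed by an OSD:

    # Sketch (assumption): parse a captured `ceph-volume lvm list --format json`
    # dump (hypothetical file name) and report LVs that already carry OSD tags.
    import json

    with open("lvm_list.json") as f:   # hypothetical capture of the JSON above
        osds = json.load(f)            # top-level keys are OSD ids, values are LV lists

    for osd_id, lvs in osds.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(osd_fsid={tags.get('ceph.osd_fsid')}, "
                  f"cluster_fsid={tags.get('ceph.cluster_fsid')})")
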
Jan 23 10:28:35 compute-0 systemd[1]: libpod-6cf41d6e26ab32ef1d7657ee0093beb06f706211fd7eba1b2738abb8bc2f8919.scope: Deactivated successfully.
Jan 23 10:28:35 compute-0 podman[271303]: 2026-01-23 10:28:35.587954834 +0000 UTC m=+0.435000837 container died 6cf41d6e26ab32ef1d7657ee0093beb06f706211fd7eba1b2738abb8bc2f8919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_herschel, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:28:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d34eea82c6820cf81a2a464791b812e2e3b2a3dadc8f4537906e1134adf4e404-merged.mount: Deactivated successfully.
Jan 23 10:28:35 compute-0 podman[271303]: 2026-01-23 10:28:35.625979249 +0000 UTC m=+0.473025232 container remove 6cf41d6e26ab32ef1d7657ee0093beb06f706211fd7eba1b2738abb8bc2f8919 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_herschel, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:28:35 compute-0 systemd[1]: libpod-conmon-6cf41d6e26ab32ef1d7657ee0093beb06f706211fd7eba1b2738abb8bc2f8919.scope: Deactivated successfully.
Jan 23 10:28:35 compute-0 sudo[271194]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:35 compute-0 sudo[271340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:28:35 compute-0 sudo[271340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:35 compute-0 sudo[271340]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:28:35 compute-0 sudo[271365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:28:35 compute-0 sudo[271365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:36 compute-0 podman[271430]: 2026-01-23 10:28:36.194754572 +0000 UTC m=+0.046985672 container create 5224e91ef7806ef558c6f42a94ef4ee8adaebb878f016b75dc1186bf7723d8a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:28:36 compute-0 systemd[1]: Started libpod-conmon-5224e91ef7806ef558c6f42a94ef4ee8adaebb878f016b75dc1186bf7723d8a5.scope.
Jan 23 10:28:36 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:28:36 compute-0 podman[271430]: 2026-01-23 10:28:36.260002344 +0000 UTC m=+0.112233454 container init 5224e91ef7806ef558c6f42a94ef4ee8adaebb878f016b75dc1186bf7723d8a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 10:28:36 compute-0 podman[271430]: 2026-01-23 10:28:36.265859101 +0000 UTC m=+0.118090201 container start 5224e91ef7806ef558c6f42a94ef4ee8adaebb878f016b75dc1186bf7723d8a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 10:28:36 compute-0 podman[271430]: 2026-01-23 10:28:36.269824454 +0000 UTC m=+0.122055584 container attach 5224e91ef7806ef558c6f42a94ef4ee8adaebb878f016b75dc1186bf7723d8a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:28:36 compute-0 reverent_lederberg[271446]: 167 167
Jan 23 10:28:36 compute-0 systemd[1]: libpod-5224e91ef7806ef558c6f42a94ef4ee8adaebb878f016b75dc1186bf7723d8a5.scope: Deactivated successfully.
Jan 23 10:28:36 compute-0 podman[271430]: 2026-01-23 10:28:36.270973357 +0000 UTC m=+0.123204477 container died 5224e91ef7806ef558c6f42a94ef4ee8adaebb878f016b75dc1186bf7723d8a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:28:36 compute-0 podman[271430]: 2026-01-23 10:28:36.177795608 +0000 UTC m=+0.030026728 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-16c536522d5e21852da665439ccfae698c7d11b6c54f59bf75bdc6a8b681ad4d-merged.mount: Deactivated successfully.
Jan 23 10:28:36 compute-0 podman[271430]: 2026-01-23 10:28:36.299952644 +0000 UTC m=+0.152183744 container remove 5224e91ef7806ef558c6f42a94ef4ee8adaebb878f016b75dc1186bf7723d8a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 10:28:36 compute-0 systemd[1]: libpod-conmon-5224e91ef7806ef558c6f42a94ef4ee8adaebb878f016b75dc1186bf7723d8a5.scope: Deactivated successfully.
Jan 23 10:28:36 compute-0 nova_compute[249229]: 2026-01-23 10:28:36.353 249233 INFO nova.compute.manager [None req-d9471d31-74bc-4497-8082-9dfdad1c439f f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Get console output
Jan 23 10:28:36 compute-0 nova_compute[249229]: 2026-01-23 10:28:36.364 255486 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 23 10:28:36 compute-0 podman[271469]: 2026-01-23 10:28:36.452498368 +0000 UTC m=+0.037586034 container create f9ba6bd7c52a3dcc66aeb286048c9c98d84e8b087f8cc1a4f47385c53ee88648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 10:28:36 compute-0 systemd[1]: Started libpod-conmon-f9ba6bd7c52a3dcc66aeb286048c9c98d84e8b087f8cc1a4f47385c53ee88648.scope.
Jan 23 10:28:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:36.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:36 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18c2ffe7bc1d029fe53b484be14f532599a3afaf97a6c3887d87a13f400d0ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:36 compute-0 podman[271469]: 2026-01-23 10:28:36.437113708 +0000 UTC m=+0.022201394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18c2ffe7bc1d029fe53b484be14f532599a3afaf97a6c3887d87a13f400d0ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18c2ffe7bc1d029fe53b484be14f532599a3afaf97a6c3887d87a13f400d0ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18c2ffe7bc1d029fe53b484be14f532599a3afaf97a6c3887d87a13f400d0ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:28:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 246 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Jan 23 10:28:36 compute-0 podman[271469]: 2026-01-23 10:28:36.552823891 +0000 UTC m=+0.137911577 container init f9ba6bd7c52a3dcc66aeb286048c9c98d84e8b087f8cc1a4f47385c53ee88648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dirac, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:28:36 compute-0 podman[271469]: 2026-01-23 10:28:36.559589244 +0000 UTC m=+0.144676910 container start f9ba6bd7c52a3dcc66aeb286048c9c98d84e8b087f8cc1a4f47385c53ee88648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 10:28:36 compute-0 podman[271469]: 2026-01-23 10:28:36.56364509 +0000 UTC m=+0.148732776 container attach f9ba6bd7c52a3dcc66aeb286048c9c98d84e8b087f8cc1a4f47385c53ee88648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 10:28:36 compute-0 ceph-mon[74335]: pgmap v1063: 353 pgs: 353 active+clean; 113 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 1.5 MiB/s wr, 37 op/s
Jan 23 10:28:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:37 compute-0 lvm[271560]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:28:37 compute-0 lvm[271560]: VG ceph_vg0 finished
Jan 23 10:28:37 compute-0 jovial_dirac[271485]: {}
Jan 23 10:28:37 compute-0 systemd[1]: libpod-f9ba6bd7c52a3dcc66aeb286048c9c98d84e8b087f8cc1a4f47385c53ee88648.scope: Deactivated successfully.
Jan 23 10:28:37 compute-0 systemd[1]: libpod-f9ba6bd7c52a3dcc66aeb286048c9c98d84e8b087f8cc1a4f47385c53ee88648.scope: Consumed 1.066s CPU time.
Jan 23 10:28:37 compute-0 podman[271469]: 2026-01-23 10:28:37.288204208 +0000 UTC m=+0.873291874 container died f9ba6bd7c52a3dcc66aeb286048c9c98d84e8b087f8cc1a4f47385c53ee88648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dirac, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 10:28:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 10:28:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e18c2ffe7bc1d029fe53b484be14f532599a3afaf97a6c3887d87a13f400d0ef-merged.mount: Deactivated successfully.
Jan 23 10:28:37 compute-0 podman[271469]: 2026-01-23 10:28:37.344304439 +0000 UTC m=+0.929392105 container remove f9ba6bd7c52a3dcc66aeb286048c9c98d84e8b087f8cc1a4f47385c53ee88648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 10:28:37 compute-0 systemd[1]: libpod-conmon-f9ba6bd7c52a3dcc66aeb286048c9c98d84e8b087f8cc1a4f47385c53ee88648.scope: Deactivated successfully.
Jan 23 10:28:37 compute-0 sudo[271365]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:28:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:28:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:37.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:28:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:28:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:28:37 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:28:37 compute-0 sudo[271577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:28:37 compute-0 sudo[271577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:37 compute-0 sudo[271577]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:37 compute-0 ovn_controller[151634]: 2026-01-23T10:28:37Z|00068|binding|INFO|Releasing lport 5cbba0a0-5f58-4d90-8d1c-814aceb1262d from this chassis (sb_readonly=0)
Jan 23 10:28:37 compute-0 nova_compute[249229]: 2026-01-23 10:28:37.718 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:37 compute-0 ovn_controller[151634]: 2026-01-23T10:28:37Z|00069|binding|INFO|Releasing lport 5cbba0a0-5f58-4d90-8d1c-814aceb1262d from this chassis (sb_readonly=0)
Jan 23 10:28:37 compute-0 nova_compute[249229]: 2026-01-23 10:28:37.777 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:37.822Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:37 compute-0 ceph-mon[74335]: pgmap v1064: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 246 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Jan 23 10:28:37 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:28:37 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:28:38 compute-0 nova_compute[249229]: 2026-01-23 10:28:38.247 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:38.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 244 KiB/s rd, 662 KiB/s wr, 43 op/s
Jan 23 10:28:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:38.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:28:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:38.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:28:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:38.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:28:38 compute-0 nova_compute[249229]: 2026-01-23 10:28:38.947 249233 INFO nova.compute.manager [None req-04223fc6-55f8-4e41-bf7b-49e884e666ef f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Get console output
Jan 23 10:28:38 compute-0 nova_compute[249229]: 2026-01-23 10:28:38.955 255486 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 23 10:28:39 compute-0 ceph-mon[74335]: pgmap v1065: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 244 KiB/s rd, 662 KiB/s wr, 43 op/s
Jan 23 10:28:39 compute-0 NetworkManager[48866]: <info>  [1769164119.4546] manager: (patch-provnet-995e8c2d-ca55-405c-bf26-97e408875e42-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 23 10:28:39 compute-0 nova_compute[249229]: 2026-01-23 10:28:39.454 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:39 compute-0 NetworkManager[48866]: <info>  [1769164119.4560] manager: (patch-br-int-to-provnet-995e8c2d-ca55-405c-bf26-97e408875e42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 23 10:28:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:39.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:39 compute-0 ovn_controller[151634]: 2026-01-23T10:28:39Z|00070|binding|INFO|Releasing lport 5cbba0a0-5f58-4d90-8d1c-814aceb1262d from this chassis (sb_readonly=0)
Jan 23 10:28:39 compute-0 nova_compute[249229]: 2026-01-23 10:28:39.515 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:39 compute-0 nova_compute[249229]: 2026-01-23 10:28:39.519 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:39 compute-0 podman[271605]: 2026-01-23 10:28:39.569488584 +0000 UTC m=+0.065071578 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:28:39 compute-0 nova_compute[249229]: 2026-01-23 10:28:39.794 249233 INFO nova.compute.manager [None req-0ccc9e14-04bf-4f5d-941d-d783e1ef520e f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Get console output
Jan 23 10:28:39 compute-0 nova_compute[249229]: 2026-01-23 10:28:39.798 255486 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 23 10:28:39 compute-0 nova_compute[249229]: 2026-01-23 10:28:39.820 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:39] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Jan 23 10:28:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:39] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Jan 23 10:28:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 244 KiB/s rd, 661 KiB/s wr, 43 op/s
Jan 23 10:28:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:40.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.752 249233 DEBUG nova.compute.manager [req-34ffa6d5-e183-4866-8ca1-49a10a925ec7 req-e05d6e90-93cf-4589-acff-9aa2d7be3afa 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Received event network-changed-5775c66b-2d08-4c9d-83fe-d4c692e19472 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.753 249233 DEBUG nova.compute.manager [req-34ffa6d5-e183-4866-8ca1-49a10a925ec7 req-e05d6e90-93cf-4589-acff-9aa2d7be3afa 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Refreshing instance network info cache due to event network-changed-5775c66b-2d08-4c9d-83fe-d4c692e19472. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.754 249233 DEBUG oslo_concurrency.lockutils [req-34ffa6d5-e183-4866-8ca1-49a10a925ec7 req-e05d6e90-93cf-4589-acff-9aa2d7be3afa 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.755 249233 DEBUG oslo_concurrency.lockutils [req-34ffa6d5-e183-4866-8ca1-49a10a925ec7 req-e05d6e90-93cf-4589-acff-9aa2d7be3afa 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquired lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.755 249233 DEBUG nova.network.neutron [req-34ffa6d5-e183-4866-8ca1-49a10a925ec7 req-e05d6e90-93cf-4589-acff-9aa2d7be3afa 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Refreshing network info cache for port 5775c66b-2d08-4c9d-83fe-d4c692e19472 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.841 249233 DEBUG oslo_concurrency.lockutils [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "2cb20ba8-5b68-4715-9848-bad345c47a31" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.842 249233 DEBUG oslo_concurrency.lockutils [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.842 249233 DEBUG oslo_concurrency.lockutils [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.843 249233 DEBUG oslo_concurrency.lockutils [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.844 249233 DEBUG oslo_concurrency.lockutils [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.846 249233 INFO nova.compute.manager [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Terminating instance
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.850 249233 DEBUG nova.compute.manager [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 10:28:40 compute-0 kernel: tap5775c66b-2d (unregistering): left promiscuous mode
Jan 23 10:28:40 compute-0 NetworkManager[48866]: <info>  [1769164120.9145] device (tap5775c66b-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 10:28:40 compute-0 ovn_controller[151634]: 2026-01-23T10:28:40Z|00071|binding|INFO|Releasing lport 5775c66b-2d08-4c9d-83fe-d4c692e19472 from this chassis (sb_readonly=0)
Jan 23 10:28:40 compute-0 ovn_controller[151634]: 2026-01-23T10:28:40Z|00072|binding|INFO|Setting lport 5775c66b-2d08-4c9d-83fe-d4c692e19472 down in Southbound
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.922 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:40 compute-0 ovn_controller[151634]: 2026-01-23T10:28:40Z|00073|binding|INFO|Removing iface tap5775c66b-2d ovn-installed in OVS
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.924 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:40 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:40.932 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:d3:68 10.100.0.3'], port_security=['fa:16:3e:07:d3:68 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2cb20ba8-5b68-4715-9848-bad345c47a31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64d8458c-fab0-469a-aa4f-0a8a3ecc755f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acc90003f0f7412b8daf8a1b6f0f1494', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77c410d2-c19a-410d-827c-0cf5352f9f39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51885346-a2e1-48ee-accb-48f791330df1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>], logical_port=5775c66b-2d08-4c9d-83fe-d4c692e19472) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb761ae640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:28:40 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:40.934 161921 INFO neutron.agent.ovn.metadata.agent [-] Port 5775c66b-2d08-4c9d-83fe-d4c692e19472 in datapath 64d8458c-fab0-469a-aa4f-0a8a3ecc755f unbound from our chassis
Jan 23 10:28:40 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:40.935 161921 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64d8458c-fab0-469a-aa4f-0a8a3ecc755f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 23 10:28:40 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:40.937 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b3ee63-a9b4-4eb5-be47-55f2af1a223d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:40 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:40.937 161921 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f namespace which is not needed anymore
Jan 23 10:28:40 compute-0 nova_compute[249229]: 2026-01-23 10:28:40.945 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:40 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 23 10:28:40 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000d.scope: Consumed 14.976s CPU time.
Jan 23 10:28:40 compute-0 systemd-machined[216411]: Machine qemu-4-instance-0000000d terminated.
Jan 23 10:28:41 compute-0 neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f[270683]: [NOTICE]   (270687) : haproxy version is 2.8.14-c23fe91
Jan 23 10:28:41 compute-0 neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f[270683]: [NOTICE]   (270687) : path to executable is /usr/sbin/haproxy
Jan 23 10:28:41 compute-0 neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f[270683]: [WARNING]  (270687) : Exiting Master process...
Jan 23 10:28:41 compute-0 neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f[270683]: [ALERT]    (270687) : Current worker (270689) exited with code 143 (Terminated)
Jan 23 10:28:41 compute-0 neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f[270683]: [WARNING]  (270687) : All workers exited. Exiting... (0)
Jan 23 10:28:41 compute-0 systemd[1]: libpod-eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e.scope: Deactivated successfully.
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.072 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:41 compute-0 podman[271652]: 2026-01-23 10:28:41.0753391 +0000 UTC m=+0.045323684 container died eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.079 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.096 249233 INFO nova.virt.libvirt.driver [-] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Instance destroyed successfully.
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.096 249233 DEBUG nova.objects.instance [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lazy-loading 'resources' on Instance uuid 2cb20ba8-5b68-4715-9848-bad345c47a31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 10:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e-userdata-shm.mount: Deactivated successfully.
Jan 23 10:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-474a621dad5e73f890f15095c0cd73edf2db8f99eacd8cb0d36787bca84e72f9-merged.mount: Deactivated successfully.
Jan 23 10:28:41 compute-0 podman[271652]: 2026-01-23 10:28:41.117551715 +0000 UTC m=+0.087536289 container cleanup eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.120 249233 DEBUG nova.virt.libvirt.vif [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:27:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-823186335',display_name='tempest-TestNetworkBasicOps-server-823186335',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-823186335',id=13,image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALoQy92z4DUQMTV+DqFHSml8UJUaVcff/mSaypHwFMH7pDs09vJt3HuFEDGESi4DTro4DoZamY+RqX7NCM6Mkdp29d9ri0gEUF5j3pATC3bt0D18Sus1fVbyPJqdBijKQ==',key_name='tempest-TestNetworkBasicOps-1062461069',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:28:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acc90003f0f7412b8daf8a1b6f0f1494',ramdisk_id='',reservation_id='r-2q6ugufv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='271ec98e-d058-421b-bbfb-4b4a5954c90a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-655467240',owner_user_name='tempest-TestNetworkBasicOps-655467240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:28:07Z,user_data=None,user_id='f459c4e71e6c47acb0f8aaf83f34695e',uuid=2cb20ba8-5b68-4715-9848-bad345c47a31,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.120 249233 DEBUG nova.network.os_vif_util [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converting VIF {"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.121 249233 DEBUG nova.network.os_vif_util [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:d3:68,bridge_name='br-int',has_traffic_filtering=True,id=5775c66b-2d08-4c9d-83fe-d4c692e19472,network=Network(64d8458c-fab0-469a-aa4f-0a8a3ecc755f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5775c66b-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.121 249233 DEBUG os_vif [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:d3:68,bridge_name='br-int',has_traffic_filtering=True,id=5775c66b-2d08-4c9d-83fe-d4c692e19472,network=Network(64d8458c-fab0-469a-aa4f-0a8a3ecc755f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5775c66b-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.123 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.123 249233 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5775c66b-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:28:41 compute-0 systemd[1]: libpod-conmon-eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e.scope: Deactivated successfully.
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.125 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.126 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.129 249233 INFO os_vif [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:d3:68,bridge_name='br-int',has_traffic_filtering=True,id=5775c66b-2d08-4c9d-83fe-d4c692e19472,network=Network(64d8458c-fab0-469a-aa4f-0a8a3ecc755f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5775c66b-2d')
Jan 23 10:28:41 compute-0 podman[271689]: 2026-01-23 10:28:41.178669019 +0000 UTC m=+0.039650822 container remove eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 10:28:41 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:41.183 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[16dc48d4-f1e5-47be-aa6d-cca7d42dd6f0]: (4, ('Fri Jan 23 10:28:41 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f (eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e)\needf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e\nFri Jan 23 10:28:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f (eedf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e)\needf225d072117fa873404c5c3ca917eaad295469b916f86d736409371dda85e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:41 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:41.185 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7bfd68-9bd3-4e63-a3f6-2ccb5e0c6450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:41 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:41.186 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64d8458c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.187 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:41 compute-0 kernel: tap64d8458c-f0: left promiscuous mode
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.189 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:41 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:41.192 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0ff353-9ba0-4516-b7c2-6783a845c57f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:41 compute-0 nova_compute[249229]: 2026-01-23 10:28:41.202 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:41 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:41.209 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[99ea14be-b447-4f3e-a7b3-a3467316203b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:41 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:41.211 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[20d292d1-8021-4771-b7c5-93d6b46e1681]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:41 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:41.227 255218 DEBUG oslo.privsep.daemon [-] privsep: reply[09a46e6b-de89-45ba-bbfc-715e9cb277b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520673, 'reachable_time': 23453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271723, 'error': None, 'target': 'ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 10:28:41 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:41.230 162436 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 23 10:28:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d64d8458c\x2dfab0\x2d469a\x2daa4f\x2d0a8a3ecc755f.mount: Deactivated successfully.
Jan 23 10:28:41 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:41.231 162436 DEBUG oslo.privsep.daemon [-] privsep: reply[68406190-2916-4442-9a94-3afa0fa2bd84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
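[editor note] The privsep replies above show the metadata agent tearing down the ovnmeta-<network> namespace once its last port is gone; neutron's privileged remove_netns helper delegates to pyroute2. A minimal sketch of the same operation, assuming pyroute2 is installed and the caller already runs with the privileges privsep normally provides (the function name below is illustrative, not the agent's own helper):

```python
# Sketch: remove a metadata namespace the way the privileged ip_lib
# helper does, using pyroute2's netns module directly.
from pyroute2 import netns

NS_NAME = "ovnmeta-64d8458c-fab0-469a-aa4f-0a8a3ecc755f"  # taken from the log above

def remove_namespace(name: str) -> None:
    """Delete a named network namespace if it still exists."""
    if name in netns.listnetns():
        netns.remove(name)  # equivalent to 'ip netns delete <name>'

if __name__ == "__main__":
    remove_namespace(NS_NAME)
```

Once the namespace is unlinked, systemd reports the corresponding run-netns mount unit as deactivated, which is the next line in the journal.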
Jan 23 10:28:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:41.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:41 compute-0 ceph-mon[74335]: pgmap v1066: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 244 KiB/s rd, 661 KiB/s wr, 43 op/s
Jan 23 10:28:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.089 249233 DEBUG nova.network.neutron [req-34ffa6d5-e183-4866-8ca1-49a10a925ec7 req-e05d6e90-93cf-4589-acff-9aa2d7be3afa 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Updated VIF entry in instance network info cache for port 5775c66b-2d08-4c9d-83fe-d4c692e19472. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.090 249233 DEBUG nova.network.neutron [req-34ffa6d5-e183-4866-8ca1-49a10a925ec7 req-e05d6e90-93cf-4589-acff-9aa2d7be3afa 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Updating instance_info_cache with network_info: [{"id": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "address": "fa:16:3e:07:d3:68", "network": {"id": "64d8458c-fab0-469a-aa4f-0a8a3ecc755f", "bridge": "br-int", "label": "tempest-network-smoke--1602622259", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acc90003f0f7412b8daf8a1b6f0f1494", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5775c66b-2d", "ovs_interfaceid": "5775c66b-2d08-4c9d-83fe-d4c692e19472", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.113 249233 DEBUG oslo_concurrency.lockutils [req-34ffa6d5-e183-4866-8ca1-49a10a925ec7 req-e05d6e90-93cf-4589-acff-9aa2d7be3afa 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Releasing lock "refresh_cache-2cb20ba8-5b68-4715-9848-bad345c47a31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
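[editor note] The instance_info_cache payload logged two lines above is a plain list of VIF dicts, so pulling the fixed IPs and MTU out of it is simple traversal. A small sketch against the structure shown, trimmed to the fields actually read here:

```python
# Sketch: extract fixed IPs and MTU from a Nova network_info cache entry.
# The dict below is abbreviated from the log line above.
network_info = [{
    "id": "5775c66b-2d08-4c9d-83fe-d4c692e19472",
    "address": "fa:16:3e:07:d3:68",
    "network": {
        "subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4}],
        }],
        "meta": {"mtu": 1442},
    },
}]

for vif in network_info:
    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]
                 if ip["type"] == "fixed"]
    print(vif["id"], vif["address"], fixed_ips, vif["network"]["meta"]["mtu"])
```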
Jan 23 10:28:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 59 KiB/s wr, 15 op/s
Jan 23 10:28:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:42.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.828 249233 DEBUG nova.compute.manager [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Received event network-vif-unplugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.828 249233 DEBUG oslo_concurrency.lockutils [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.828 249233 DEBUG oslo_concurrency.lockutils [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.829 249233 DEBUG oslo_concurrency.lockutils [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.829 249233 DEBUG nova.compute.manager [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] No waiting events found dispatching network-vif-unplugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.829 249233 DEBUG nova.compute.manager [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Received event network-vif-unplugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.829 249233 DEBUG nova.compute.manager [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Received event network-vif-plugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.830 249233 DEBUG oslo_concurrency.lockutils [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Acquiring lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.830 249233 DEBUG oslo_concurrency.lockutils [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.830 249233 DEBUG oslo_concurrency.lockutils [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.830 249233 DEBUG nova.compute.manager [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] No waiting events found dispatching network-vif-plugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.831 249233 WARNING nova.compute.manager [req-9c86274b-632a-4466-834a-ca9e19fed29e req-1c3cc8d7-a887-491f-ad71-91dcaea17356 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Received unexpected event network-vif-plugged-5775c66b-2d08-4c9d-83fe-d4c692e19472 for instance with vm_state active and task_state deleting.
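[editor note] The acquire/release pairs around pop_instance_event above are oslo.concurrency named locks keyed on "<instance-uuid>-events". A condensed sketch of that locking pattern; the _events dict is a simplified stand-in for Nova's InstanceEvents bookkeeping, used purely for illustration:

```python
# Sketch of the per-instance named-lock pattern seen in the log,
# using oslo.concurrency's lock context manager.
from oslo_concurrency import lockutils

_events = {}  # {instance_uuid: {event_name: payload}} -- illustrative store

def pop_instance_event(instance_uuid: str, event_name: str):
    # Same lock-name convention as the log: "<uuid>-events"
    with lockutils.lock(f"{instance_uuid}-events"):
        return _events.get(instance_uuid, {}).pop(event_name, None)

if __name__ == "__main__":
    uuid = "2cb20ba8-5b68-4715-9848-bad345c47a31"
    # Returns None when no waiter registered the event, which matches the
    # "No waiting events found dispatching ..." messages above.
    print(pop_instance_event(uuid, "network-vif-unplugged"))
```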
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.932 249233 INFO nova.virt.libvirt.driver [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Deleting instance files /var/lib/nova/instances/2cb20ba8-5b68-4715-9848-bad345c47a31_del
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.933 249233 INFO nova.virt.libvirt.driver [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Deletion of /var/lib/nova/instances/2cb20ba8-5b68-4715-9848-bad345c47a31_del complete
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.978 249233 INFO nova.compute.manager [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Took 2.13 seconds to destroy the instance on the hypervisor.
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.979 249233 DEBUG oslo.service.loopingcall [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.979 249233 DEBUG nova.compute.manager [-] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 10:28:42 compute-0 nova_compute[249229]: 2026-01-23 10:28:42.979 249233 DEBUG nova.network.neutron [-] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 10:28:43 compute-0 nova_compute[249229]: 2026-01-23 10:28:43.248 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:43.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:43.714Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
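[editor note] Alertmanager keeps failing to deliver to the Ceph dashboard webhook receivers on compute-1 and compute-2 (port 8443, /api/prometheus_receiver), with context deadline exceeded. A quick standalone reachability probe can confirm whether the endpoints answer at all; the 5 second timeout and empty payload below are arbitrary choices for the sketch, not Alertmanager's configuration:

```python
# Sketch: probe the dashboard prometheus_receiver endpoints that
# Alertmanager cannot reach. Timeout and payload are illustrative only.
import json
import urllib.error
import urllib.request

RECEIVERS = [
    "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
    "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
]

for url in RECEIVERS:
    req = urllib.request.Request(
        url,
        data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(url, "->", resp.status)
    except (urllib.error.URLError, OSError) as exc:
        print(url, "->", exc)
```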
Jan 23 10:28:44 compute-0 ceph-mon[74335]: pgmap v1067: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 59 KiB/s wr, 15 op/s
Jan 23 10:28:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 52 KiB/s wr, 13 op/s
Jan 23 10:28:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:44.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:45.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
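[editor note] The anonymous "HEAD / HTTP/1.0" 200 entries arriving every second from 192.168.122.100 and .102 look like load-balancer liveness probes against the RGW beast frontend. The same probe can be reproduced by hand; the host and port below are assumptions, since these lines do not show the RGW bind address:

```python
# Sketch: issue the same anonymous HEAD probe the health checker sends.
# Host and port are assumptions; adjust to the actual RGW frontend.
import http.client

conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=3)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)  # expect 200 with an empty body
conn.close()
```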
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.022 249233 DEBUG nova.network.neutron [-] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.068 249233 INFO nova.compute.manager [-] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Took 3.09 seconds to deallocate network for instance.
Jan 23 10:28:46 compute-0 ceph-mon[74335]: pgmap v1068: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 52 KiB/s wr, 13 op/s
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.107 249233 DEBUG nova.compute.manager [req-5d38ee25-0f2e-4090-a1b2-68e6b21af747 req-6256d829-63cf-41bb-9371-6e50e2e45629 56a5c2dc076c4e2489c82e9feac864fb 3b334319b2184689ac0dd92f207d57b0 - - default default] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Received event network-vif-deleted-5775c66b-2d08-4c9d-83fe-d4c692e19472 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.127 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.131 249233 DEBUG oslo_concurrency.lockutils [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.132 249233 DEBUG oslo_concurrency.lockutils [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.196 249233 DEBUG oslo_concurrency.processutils [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:28:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:28:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:46.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:28:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 53 KiB/s wr, 41 op/s
Jan 23 10:28:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:28:46 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3848999722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.697 249233 DEBUG oslo_concurrency.processutils [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
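[editor note] Nova's update_usage path shells out to exactly the command logged above to read cluster capacity from Ceph. A standalone sketch of the same call; which JSON fields Nova actually consumes is simplified here to the cluster totals:

```python
# Sketch: run the same 'ceph df' call Nova issues and read the totals.
import json
import subprocess

cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
stats = json.loads(out)["stats"]
total_gib = stats["total_bytes"] / 1024 ** 3
avail_gib = stats["total_avail_bytes"] / 1024 ** 3
print(f"cluster: {total_gib:.0f} GiB total, {avail_gib:.0f} GiB available")
```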
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.704 249233 DEBUG nova.compute.provider_tree [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.717 249233 DEBUG nova.scheduler.client.report [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
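[editor note] The inventory dict above fixes totals, reservations, and allocation ratios per resource class; the capacity Placement will schedule against follows as (total - reserved) * allocation_ratio. A worked check using the logged values:

```python
# Worked example: effective capacity implied by the inventory above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```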
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.734 249233 DEBUG oslo_concurrency.lockutils [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.761 249233 INFO nova.scheduler.client.report [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Deleted allocations for instance 2cb20ba8-5b68-4715-9848-bad345c47a31
Jan 23 10:28:46 compute-0 nova_compute[249229]: 2026-01-23 10:28:46.825 249233 DEBUG oslo_concurrency.lockutils [None req-4066f038-1840-4699-84db-1d7ba943746b f459c4e71e6c47acb0f8aaf83f34695e acc90003f0f7412b8daf8a1b6f0f1494 - - default default] Lock "2cb20ba8-5b68-4715-9848-bad345c47a31" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:28:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:47 compute-0 sudo[271753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:28:47 compute-0 sudo[271753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:28:47 compute-0 sudo[271753]: pam_unix(sudo:session): session closed for user root
Jan 23 10:28:47 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3848999722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:28:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:47.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:47.823Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:48 compute-0 nova_compute[249229]: 2026-01-23 10:28:48.250 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:48 compute-0 ceph-mon[74335]: pgmap v1069: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 53 KiB/s wr, 41 op/s
Jan 23 10:28:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 23 10:28:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:28:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:48.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:28:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:48.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:49.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:49 compute-0 ceph-mon[74335]: pgmap v1070: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 23 10:28:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/292376593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:28:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/292376593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:28:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:49] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Jan 23 10:28:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:49] "GET /metrics HTTP/1.1" 200 48565 "" "Prometheus/2.51.0"
Jan 23 10:28:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:28:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:28:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:28:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:28:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:28:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:28:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:28:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:28:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 23 10:28:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:50.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:28:50 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:50.976 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:28:50 compute-0 nova_compute[249229]: 2026-01-23 10:28:50.977 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:50 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:50.977 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:28:51 compute-0 nova_compute[249229]: 2026-01-23 10:28:51.129 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:51 compute-0 nova_compute[249229]: 2026-01-23 10:28:51.278 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:51 compute-0 nova_compute[249229]: 2026-01-23 10:28:51.366 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:28:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:51.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:28:51 compute-0 ceph-mon[74335]: pgmap v1071: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 23 10:28:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 23 10:28:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:52.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.759551) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164132759724, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2138, "num_deletes": 251, "total_data_size": 4368358, "memory_usage": 4433216, "flush_reason": "Manual Compaction"}
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164132791460, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4222889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29718, "largest_seqno": 31855, "table_properties": {"data_size": 4213075, "index_size": 6244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19896, "raw_average_key_size": 20, "raw_value_size": 4193741, "raw_average_value_size": 4305, "num_data_blocks": 264, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163913, "oldest_key_time": 1769163913, "file_creation_time": 1769164132, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 31970 microseconds, and 11180 cpu microseconds.
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.791514) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4222889 bytes OK
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.791562) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.794459) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.794477) EVENT_LOG_v1 {"time_micros": 1769164132794471, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.794495) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4359693, prev total WAL file size 4359693, number of live WAL files 2.
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.795660) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(4123KB)], [65(12MB)]
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164132795748, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16839338, "oldest_snapshot_seqno": -1}
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6333 keys, 14607092 bytes, temperature: kUnknown
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164132916912, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14607092, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14564521, "index_size": 25629, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 162262, "raw_average_key_size": 25, "raw_value_size": 14450179, "raw_average_value_size": 2281, "num_data_blocks": 1024, "num_entries": 6333, "num_filter_entries": 6333, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769164132, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.917578) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14607092 bytes
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.919209) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.9 rd, 120.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.0 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.4) write-amplify(3.5) OK, records in: 6851, records dropped: 518 output_compression: NoCompression
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.919303) EVENT_LOG_v1 {"time_micros": 1769164132919240, "job": 36, "event": "compaction_finished", "compaction_time_micros": 121249, "compaction_time_cpu_micros": 37273, "output_level": 6, "num_output_files": 1, "total_output_size": 14607092, "num_input_records": 6851, "num_output_records": 6333, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
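[editor note] The mon's RocksDB flush and compaction activity is logged as EVENT_LOG_v1 lines with an embedded JSON object, so throughput can be computed directly from the compaction_finished record. A small sketch parsing a line of that shape; the sample string is abridged from the entry above:

```python
# Sketch: pull the JSON payload out of a RocksDB EVENT_LOG_v1 line and
# compute compaction throughput. Sample line abridged from the log.
import json

line = ('rocksdb: (Original Log Time 2026/01/23-10:28:52.919303) EVENT_LOG_v1 '
        '{"time_micros": 1769164132919240, "job": 36, "event": "compaction_finished", '
        '"compaction_time_micros": 121249, "total_output_size": 14607092}')

payload = json.loads(line.split("EVENT_LOG_v1", 1)[1])
if payload["event"] == "compaction_finished":
    mib = payload["total_output_size"] / 1024 ** 2
    secs = payload["compaction_time_micros"] / 1e6
    print(f"job {payload['job']}: wrote {mib:.1f} MiB in {secs:.3f}s "
          f"({mib / secs:.1f} MiB/s)")
```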
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164132920929, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164132925129, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.795575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.925214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.925220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.925221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.925222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:28:52 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:28:52.925224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:28:53 compute-0 nova_compute[249229]: 2026-01-23 10:28:53.253 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:53.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:53.714Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:53 compute-0 ceph-mon[74335]: pgmap v1072: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 23 10:28:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:28:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:54.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:55.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:56 compute-0 nova_compute[249229]: 2026-01-23 10:28:56.093 249233 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164121.0918806, 2cb20ba8-5b68-4715-9848-bad345c47a31 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 10:28:56 compute-0 nova_compute[249229]: 2026-01-23 10:28:56.093 249233 INFO nova.compute.manager [-] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] VM Stopped (Lifecycle Event)
Jan 23 10:28:56 compute-0 nova_compute[249229]: 2026-01-23 10:28:56.131 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:56 compute-0 nova_compute[249229]: 2026-01-23 10:28:56.289 249233 DEBUG nova.compute.manager [None req-bce4a6ce-d509-4750-af35-441376e1f52b - - - - - -] [instance: 2cb20ba8-5b68-4715-9848-bad345c47a31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 10:28:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:28:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:56.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:28:57 compute-0 ceph-mon[74335]: pgmap v1073: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 10:28:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:57.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:57.824Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:57 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:57.981 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
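[editor note] This DbSetCommand is the delayed acknowledgement of the SB_Global nb_cfg bump seen at 10:28:50: the agent announced a 7 second delay, waited, then wrote neutron:ovn-metadata-sb-cfg=15 into its Chassis_Private row. A condensed sketch of that pattern, assuming an already connected ovsdbapp southbound API object named sb_idl (connection setup omitted); the randomized wait below is illustrative, not the agent's exact delay calculation:

```python
# Sketch of the delayed nb_cfg acknowledgement. 'sb_idl' is assumed to
# be a connected ovsdbapp southbound API object; building it is out of
# scope for this sketch.
import random
import time

def ack_nb_cfg(sb_idl, chassis_private_record: str, nb_cfg: int,
               max_delay: int = 10) -> None:
    # Spread acknowledgements over a random delay so many agents do not
    # all write to the SB database at the same instant.
    time.sleep(random.randint(0, max_delay))
    sb_idl.db_set(
        "Chassis_Private",
        chassis_private_record,
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
    ).execute(check_error=True)
```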
Jan 23 10:28:58 compute-0 nova_compute[249229]: 2026-01-23 10:28:58.255 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:28:58 compute-0 ceph-mon[74335]: pgmap v1074: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 10:28:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:28:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:58.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:28:58.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:28:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:28:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:28:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:59.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:28:59 compute-0 podman[271791]: 2026-01-23 10:28:59.593722223 +0000 UTC m=+0.110768073 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
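[editor note] The podman record above is the periodic health check for the ovn_controller container reporting health_status=healthy with test '/openstack/healthcheck'. The same check can be triggered on demand; the sketch just wraps the podman CLI and assumes podman is on PATH for the invoking user:

```python
# Sketch: trigger the ovn_controller health check manually via podman.
import subprocess

result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_controller"],
    capture_output=True, text=True,
)
# 'podman healthcheck run' exits 0 when the configured test passes.
if result.returncode == 0:
    print("healthy")
else:
    print("unhealthy:", result.stdout or result.stderr)
```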
Jan 23 10:28:59 compute-0 ceph-mon[74335]: pgmap v1075: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:28:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:59.783 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:28:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:59.784 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:28:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:28:59.784 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:28:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:59] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Jan 23 10:28:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:28:59] "GET /metrics HTTP/1.1" 200 48543 "" "Prometheus/2.51.0"
Jan 23 10:29:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:00.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:01 compute-0 nova_compute[249229]: 2026-01-23 10:29:01.132 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:01.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:02 compute-0 ceph-mon[74335]: pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:02.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:03 compute-0 nova_compute[249229]: 2026-01-23 10:29:03.257 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:29:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:03.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:29:03 compute-0 ceph-mon[74335]: pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:03.715Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:29:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:03.716Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:04.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:29:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:29:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:29:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:05.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:29:06 compute-0 nova_compute[249229]: 2026-01-23 10:29:06.133 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:06 compute-0 ceph-mon[74335]: pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:29:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:06.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:07 compute-0 sudo[271825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:29:07 compute-0 sudo[271825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:07 compute-0 sudo[271825]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:07.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:07 compute-0 nova_compute[249229]: 2026-01-23 10:29:07.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:29:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:07.825Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:29:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:07.827Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:08 compute-0 ceph-mon[74335]: pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:08 compute-0 nova_compute[249229]: 2026-01-23 10:29:08.259 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:08.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:08.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:09.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:09] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:29:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:09] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:29:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:10.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:10 compute-0 ceph-mon[74335]: pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:10 compute-0 podman[271853]: 2026-01-23 10:29:10.690860219 +0000 UTC m=+0.050607195 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 10:29:10 compute-0 nova_compute[249229]: 2026-01-23 10:29:10.715 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:29:10 compute-0 nova_compute[249229]: 2026-01-23 10:29:10.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:29:11 compute-0 nova_compute[249229]: 2026-01-23 10:29:11.134 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:11.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:11 compute-0 nova_compute[249229]: 2026-01-23 10:29:11.708 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:29:11 compute-0 nova_compute[249229]: 2026-01-23 10:29:11.723 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:29:11 compute-0 nova_compute[249229]: 2026-01-23 10:29:11.723 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:29:11 compute-0 nova_compute[249229]: 2026-01-23 10:29:11.723 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:29:11 compute-0 nova_compute[249229]: 2026-01-23 10:29:11.732 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:29:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:12 compute-0 ceph-mon[74335]: pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:12.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:12 compute-0 nova_compute[249229]: 2026-01-23 10:29:12.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:29:12 compute-0 nova_compute[249229]: 2026-01-23 10:29:12.743 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:29:12 compute-0 nova_compute[249229]: 2026-01-23 10:29:12.743 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:29:12 compute-0 nova_compute[249229]: 2026-01-23 10:29:12.744 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:29:12 compute-0 nova_compute[249229]: 2026-01-23 10:29:12.744 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:29:12 compute-0 nova_compute[249229]: 2026-01-23 10:29:12.744 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:29:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:29:13 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501464116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:29:13 compute-0 nova_compute[249229]: 2026-01-23 10:29:13.242 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:29:13 compute-0 nova_compute[249229]: 2026-01-23 10:29:13.260 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:13 compute-0 nova_compute[249229]: 2026-01-23 10:29:13.472 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:29:13 compute-0 nova_compute[249229]: 2026-01-23 10:29:13.473 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4593MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:29:13 compute-0 nova_compute[249229]: 2026-01-23 10:29:13.474 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:29:13 compute-0 nova_compute[249229]: 2026-01-23 10:29:13.474 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:29:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:29:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:13.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:29:13 compute-0 nova_compute[249229]: 2026-01-23 10:29:13.578 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:29:13 compute-0 nova_compute[249229]: 2026-01-23 10:29:13.579 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:29:13 compute-0 nova_compute[249229]: 2026-01-23 10:29:13.606 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:29:13 compute-0 ceph-mon[74335]: pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2501464116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:29:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3300512477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:29:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:13.717Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:29:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2920842876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:29:14 compute-0 nova_compute[249229]: 2026-01-23 10:29:14.090 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:29:14 compute-0 nova_compute[249229]: 2026-01-23 10:29:14.096 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:29:14 compute-0 nova_compute[249229]: 2026-01-23 10:29:14.116 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:29:14 compute-0 nova_compute[249229]: 2026-01-23 10:29:14.138 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:29:14 compute-0 nova_compute[249229]: 2026-01-23 10:29:14.138 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:29:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:14.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/423661688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:29:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2920842876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:29:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3441369713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:29:15 compute-0 nova_compute[249229]: 2026-01-23 10:29:15.138 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:29:15 compute-0 nova_compute[249229]: 2026-01-23 10:29:15.139 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:29:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:29:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:15.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:29:16 compute-0 nova_compute[249229]: 2026-01-23 10:29:16.136 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:29:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:16.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:29:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:17 compute-0 ceph-mon[74335]: pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:17.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:17 compute-0 nova_compute[249229]: 2026-01-23 10:29:17.708 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:29:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:17.827Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:18 compute-0 nova_compute[249229]: 2026-01-23 10:29:18.262 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:29:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:18.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:29:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1986486844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:29:18 compute-0 ceph-mon[74335]: pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:18 compute-0 nova_compute[249229]: 2026-01-23 10:29:18.715 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:29:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:18.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:29:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:19.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:29:19 compute-0 nova_compute[249229]: 2026-01-23 10:29:19.715 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:29:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:19] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:29:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:19] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:29:20
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['default.rgw.control', 'images', 'vms', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.log', 'backups', '.mgr']
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:29:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:29:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:29:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:20.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:20 compute-0 ceph-mon[74335]: pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:21 compute-0 nova_compute[249229]: 2026-01-23 10:29:21.138 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:29:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:21.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:29:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:22 compute-0 ovn_controller[151634]: 2026-01-23T10:29:22Z|00074|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Jan 23 10:29:22 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:29:22 compute-0 ceph-mon[74335]: pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:22.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:23 compute-0 nova_compute[249229]: 2026-01-23 10:29:23.264 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:23 compute-0 ceph-mon[74335]: pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:29:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:23.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:29:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:23.719Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:29:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:24.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:29:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:25.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:26 compute-0 nova_compute[249229]: 2026-01-23 10:29:26.140 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:26 compute-0 ceph-mon[74335]: pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:26.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:27 compute-0 sudo[271936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:29:27 compute-0 sudo[271936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:27 compute-0 sudo[271936]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:29:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:27.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:29:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:27.828Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:28 compute-0 ceph-mon[74335]: pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:28 compute-0 nova_compute[249229]: 2026-01-23 10:29:28.264 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:28.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:28.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:29 compute-0 ceph-mon[74335]: pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:29.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:29] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 23 10:29:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:29] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 23 10:29:30 compute-0 podman[271964]: 2026-01-23 10:29:30.556647914 +0000 UTC m=+0.081824866 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 23 10:29:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:30.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:31 compute-0 nova_compute[249229]: 2026-01-23 10:29:31.142 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:31.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:31 compute-0 ceph-mon[74335]: pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:29:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:32.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:29:33 compute-0 nova_compute[249229]: 2026-01-23 10:29:33.265 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:33.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:33.720Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:33 compute-0 ceph-mon[74335]: pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:34.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:29:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:29:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:35.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:35 compute-0 ceph-mon[74335]: pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:29:36 compute-0 nova_compute[249229]: 2026-01-23 10:29:36.144 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:36.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:29:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:37.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:29:37 compute-0 sudo[271998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:29:37 compute-0 sudo[271998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:37 compute-0 sudo[271998]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:37.829Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:37 compute-0 sudo[272023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 23 10:29:37 compute-0 sudo[272023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:37 compute-0 ceph-mon[74335]: pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:38 compute-0 sudo[272023]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:29:38 compute-0 nova_compute[249229]: 2026-01-23 10:29:38.268 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:29:38 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:38 compute-0 sudo[272068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:29:38 compute-0 sudo[272068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:38 compute-0 sudo[272068]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:38 compute-0 sudo[272093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:29:38 compute-0 sudo[272093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:38.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:38.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:39 compute-0 sudo[272093]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:29:39 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:29:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:29:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:29:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 583 B/s rd, 0 op/s
Jan 23 10:29:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:29:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:29:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:29:39 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:29:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:29:39 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:29:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:29:39 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:29:39 compute-0 sudo[272152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:29:39 compute-0 sudo[272152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:39 compute-0 sudo[272152]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:39 compute-0 sudo[272177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:29:39 compute-0 sudo[272177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:39 compute-0 ceph-mon[74335]: pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:29:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:29:39 compute-0 ceph-mon[74335]: pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 583 B/s rd, 0 op/s
Jan 23 10:29:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:29:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:29:39 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:29:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:29:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:39.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:29:39 compute-0 podman[272242]: 2026-01-23 10:29:39.640022567 +0000 UTC m=+0.039162859 container create 9d58c77543835bfdc255701bcbc55245655e75cf8a4d989acb52a58855f62a5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Jan 23 10:29:39 compute-0 systemd[1]: Started libpod-conmon-9d58c77543835bfdc255701bcbc55245655e75cf8a4d989acb52a58855f62a5a.scope.
Jan 23 10:29:39 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:29:39 compute-0 podman[272242]: 2026-01-23 10:29:39.622562669 +0000 UTC m=+0.021702971 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:29:39 compute-0 podman[272242]: 2026-01-23 10:29:39.725226869 +0000 UTC m=+0.124367181 container init 9d58c77543835bfdc255701bcbc55245655e75cf8a4d989acb52a58855f62a5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_newton, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 23 10:29:39 compute-0 podman[272242]: 2026-01-23 10:29:39.739922968 +0000 UTC m=+0.139063260 container start 9d58c77543835bfdc255701bcbc55245655e75cf8a4d989acb52a58855f62a5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 10:29:39 compute-0 podman[272242]: 2026-01-23 10:29:39.742951504 +0000 UTC m=+0.142091796 container attach 9d58c77543835bfdc255701bcbc55245655e75cf8a4d989acb52a58855f62a5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 23 10:29:39 compute-0 eager_newton[272258]: 167 167
Jan 23 10:29:39 compute-0 systemd[1]: libpod-9d58c77543835bfdc255701bcbc55245655e75cf8a4d989acb52a58855f62a5a.scope: Deactivated successfully.
Jan 23 10:29:39 compute-0 conmon[272258]: conmon 9d58c77543835bfdc255 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d58c77543835bfdc255701bcbc55245655e75cf8a4d989acb52a58855f62a5a.scope/container/memory.events
Jan 23 10:29:39 compute-0 podman[272242]: 2026-01-23 10:29:39.749082689 +0000 UTC m=+0.148222981 container died 9d58c77543835bfdc255701bcbc55245655e75cf8a4d989acb52a58855f62a5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_newton, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 10:29:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-13b4f5a9a93c3cf469e214366f67ae91d8d7e7899edce3f66bc59fa2a2bb9d54-merged.mount: Deactivated successfully.
Jan 23 10:29:39 compute-0 podman[272242]: 2026-01-23 10:29:39.784334946 +0000 UTC m=+0.183475238 container remove 9d58c77543835bfdc255701bcbc55245655e75cf8a4d989acb52a58855f62a5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_newton, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:29:39 compute-0 systemd[1]: libpod-conmon-9d58c77543835bfdc255701bcbc55245655e75cf8a4d989acb52a58855f62a5a.scope: Deactivated successfully.
Jan 23 10:29:39 compute-0 podman[272285]: 2026-01-23 10:29:39.961719418 +0000 UTC m=+0.053743975 container create 547c9a7b055e43bbe910521f6df5c19d7c180e952c9fa77498b4368b204475cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 23 10:29:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:39] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:29:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:39] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:29:40 compute-0 systemd[1]: Started libpod-conmon-547c9a7b055e43bbe910521f6df5c19d7c180e952c9fa77498b4368b204475cb.scope.
Jan 23 10:29:40 compute-0 podman[272285]: 2026-01-23 10:29:39.933283036 +0000 UTC m=+0.025307693 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:29:40 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72e2d1454cb3e80c8f99d900bff09cfb1818b54688ee56e122efdecba4dc92eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72e2d1454cb3e80c8f99d900bff09cfb1818b54688ee56e122efdecba4dc92eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72e2d1454cb3e80c8f99d900bff09cfb1818b54688ee56e122efdecba4dc92eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72e2d1454cb3e80c8f99d900bff09cfb1818b54688ee56e122efdecba4dc92eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72e2d1454cb3e80c8f99d900bff09cfb1818b54688ee56e122efdecba4dc92eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:40 compute-0 podman[272285]: 2026-01-23 10:29:40.070346978 +0000 UTC m=+0.162371545 container init 547c9a7b055e43bbe910521f6df5c19d7c180e952c9fa77498b4368b204475cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:29:40 compute-0 podman[272285]: 2026-01-23 10:29:40.07743303 +0000 UTC m=+0.169457587 container start 547c9a7b055e43bbe910521f6df5c19d7c180e952c9fa77498b4368b204475cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_booth, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 23 10:29:40 compute-0 podman[272285]: 2026-01-23 10:29:40.081913458 +0000 UTC m=+0.173938015 container attach 547c9a7b055e43bbe910521f6df5c19d7c180e952c9fa77498b4368b204475cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_booth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 23 10:29:40 compute-0 stupefied_booth[272302]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:29:40 compute-0 stupefied_booth[272302]: --> All data devices are unavailable
Jan 23 10:29:40 compute-0 systemd[1]: libpod-547c9a7b055e43bbe910521f6df5c19d7c180e952c9fa77498b4368b204475cb.scope: Deactivated successfully.
Jan 23 10:29:40 compute-0 podman[272285]: 2026-01-23 10:29:40.413709778 +0000 UTC m=+0.505734325 container died 547c9a7b055e43bbe910521f6df5c19d7c180e952c9fa77498b4368b204475cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:29:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-72e2d1454cb3e80c8f99d900bff09cfb1818b54688ee56e122efdecba4dc92eb-merged.mount: Deactivated successfully.
Jan 23 10:29:40 compute-0 podman[272285]: 2026-01-23 10:29:40.452575617 +0000 UTC m=+0.544600174 container remove 547c9a7b055e43bbe910521f6df5c19d7c180e952c9fa77498b4368b204475cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_booth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 10:29:40 compute-0 systemd[1]: libpod-conmon-547c9a7b055e43bbe910521f6df5c19d7c180e952c9fa77498b4368b204475cb.scope: Deactivated successfully.
Jan 23 10:29:40 compute-0 sudo[272177]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:40 compute-0 sudo[272327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:29:40 compute-0 sudo[272327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:40 compute-0 sudo[272327]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:40 compute-0 sudo[272352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:29:40 compute-0 sudo[272352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:40.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:41 compute-0 podman[272414]: 2026-01-23 10:29:41.009751308 +0000 UTC m=+0.056825282 container create 613dbd8d3e0e970a1caae3db4e36d68dd9da0f9a474d94e94a5b9924744a1cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:29:41 compute-0 systemd[1]: Started libpod-conmon-613dbd8d3e0e970a1caae3db4e36d68dd9da0f9a474d94e94a5b9924744a1cbe.scope.
Jan 23 10:29:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:29:41 compute-0 podman[272414]: 2026-01-23 10:29:41.063395669 +0000 UTC m=+0.110469663 container init 613dbd8d3e0e970a1caae3db4e36d68dd9da0f9a474d94e94a5b9924744a1cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_chatelet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 23 10:29:41 compute-0 podman[272414]: 2026-01-23 10:29:41.073736615 +0000 UTC m=+0.120810589 container start 613dbd8d3e0e970a1caae3db4e36d68dd9da0f9a474d94e94a5b9924744a1cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_chatelet, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:29:41 compute-0 podman[272414]: 2026-01-23 10:29:41.076792492 +0000 UTC m=+0.123866506 container attach 613dbd8d3e0e970a1caae3db4e36d68dd9da0f9a474d94e94a5b9924744a1cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:29:41 compute-0 nifty_chatelet[272431]: 167 167
Jan 23 10:29:41 compute-0 systemd[1]: libpod-613dbd8d3e0e970a1caae3db4e36d68dd9da0f9a474d94e94a5b9924744a1cbe.scope: Deactivated successfully.
Jan 23 10:29:41 compute-0 podman[272414]: 2026-01-23 10:29:41.082309959 +0000 UTC m=+0.129383933 container died 613dbd8d3e0e970a1caae3db4e36d68dd9da0f9a474d94e94a5b9924744a1cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:29:41 compute-0 podman[272414]: 2026-01-23 10:29:40.992136986 +0000 UTC m=+0.039210990 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:29:41 compute-0 podman[272428]: 2026-01-23 10:29:41.099300634 +0000 UTC m=+0.055960868 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 10:29:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 875 B/s rd, 0 op/s
Jan 23 10:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7eeb687cf6204d98be1112967906f7f4320a26fe335216f414312fe3f5c7ef5-merged.mount: Deactivated successfully.
Jan 23 10:29:41 compute-0 podman[272414]: 2026-01-23 10:29:41.122240609 +0000 UTC m=+0.169314583 container remove 613dbd8d3e0e970a1caae3db4e36d68dd9da0f9a474d94e94a5b9924744a1cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:29:41 compute-0 systemd[1]: libpod-conmon-613dbd8d3e0e970a1caae3db4e36d68dd9da0f9a474d94e94a5b9924744a1cbe.scope: Deactivated successfully.
Jan 23 10:29:41 compute-0 nova_compute[249229]: 2026-01-23 10:29:41.146 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:41 compute-0 podman[272471]: 2026-01-23 10:29:41.297784449 +0000 UTC m=+0.052117749 container create 0308b85501e808aa5c9665d3a0acd4740e4f3d25b05b81115f442ed19346cd77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:29:41 compute-0 systemd[1]: Started libpod-conmon-0308b85501e808aa5c9665d3a0acd4740e4f3d25b05b81115f442ed19346cd77.scope.
Jan 23 10:29:41 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:29:41 compute-0 podman[272471]: 2026-01-23 10:29:41.274859534 +0000 UTC m=+0.029192844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b20c2f9be069177f9bf165e8c028846def44c3f5b613ce95bdbe1622de8eb3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b20c2f9be069177f9bf165e8c028846def44c3f5b613ce95bdbe1622de8eb3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b20c2f9be069177f9bf165e8c028846def44c3f5b613ce95bdbe1622de8eb3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b20c2f9be069177f9bf165e8c028846def44c3f5b613ce95bdbe1622de8eb3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:41 compute-0 podman[272471]: 2026-01-23 10:29:41.385602295 +0000 UTC m=+0.139935575 container init 0308b85501e808aa5c9665d3a0acd4740e4f3d25b05b81115f442ed19346cd77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 10:29:41 compute-0 podman[272471]: 2026-01-23 10:29:41.396685501 +0000 UTC m=+0.151018761 container start 0308b85501e808aa5c9665d3a0acd4740e4f3d25b05b81115f442ed19346cd77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:29:41 compute-0 podman[272471]: 2026-01-23 10:29:41.400333845 +0000 UTC m=+0.154667155 container attach 0308b85501e808aa5c9665d3a0acd4740e4f3d25b05b81115f442ed19346cd77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_goodall, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:29:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:41.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:41 compute-0 stoic_goodall[272487]: {
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:     "1": [
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:         {
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             "devices": [
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "/dev/loop3"
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             ],
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             "lv_name": "ceph_lv0",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             "lv_size": "21470642176",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             "name": "ceph_lv0",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             "tags": {
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.cluster_name": "ceph",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.crush_device_class": "",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.encrypted": "0",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.osd_id": "1",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.type": "block",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.vdo": "0",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:                 "ceph.with_tpm": "0"
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             },
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             "type": "block",
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:             "vg_name": "ceph_vg0"
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:         }
Jan 23 10:29:41 compute-0 stoic_goodall[272487]:     ]
Jan 23 10:29:41 compute-0 stoic_goodall[272487]: }
Jan 23 10:29:41 compute-0 systemd[1]: libpod-0308b85501e808aa5c9665d3a0acd4740e4f3d25b05b81115f442ed19346cd77.scope: Deactivated successfully.
Jan 23 10:29:41 compute-0 podman[272471]: 2026-01-23 10:29:41.712477544 +0000 UTC m=+0.466810824 container died 0308b85501e808aa5c9665d3a0acd4740e4f3d25b05b81115f442ed19346cd77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 10:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b20c2f9be069177f9bf165e8c028846def44c3f5b613ce95bdbe1622de8eb3a-merged.mount: Deactivated successfully.
Jan 23 10:29:41 compute-0 podman[272471]: 2026-01-23 10:29:41.763894431 +0000 UTC m=+0.518227701 container remove 0308b85501e808aa5c9665d3a0acd4740e4f3d25b05b81115f442ed19346cd77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:29:41 compute-0 systemd[1]: libpod-conmon-0308b85501e808aa5c9665d3a0acd4740e4f3d25b05b81115f442ed19346cd77.scope: Deactivated successfully.
Jan 23 10:29:41 compute-0 sudo[272352]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:41 compute-0 sudo[272509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:29:41 compute-0 sudo[272509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:41 compute-0 sudo[272509]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:29:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 2975 syncs, 3.79 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1946 writes, 6900 keys, 1946 commit groups, 1.0 writes per commit group, ingest: 8.66 MB, 0.01 MB/s
                                           Interval WAL: 1946 writes, 808 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 10:29:41 compute-0 sudo[272535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:29:41 compute-0 sudo[272535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:42 compute-0 ceph-mon[74335]: pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 875 B/s rd, 0 op/s
Jan 23 10:29:42 compute-0 podman[272600]: 2026-01-23 10:29:42.378612065 +0000 UTC m=+0.042726020 container create 6632b9c6f995b81e374eb91785f22f9a2ff1c75f72a15b6be00b0738a5fd9906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:29:42 compute-0 systemd[1]: Started libpod-conmon-6632b9c6f995b81e374eb91785f22f9a2ff1c75f72a15b6be00b0738a5fd9906.scope.
Jan 23 10:29:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:29:42 compute-0 podman[272600]: 2026-01-23 10:29:42.35986431 +0000 UTC m=+0.023978295 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:29:42 compute-0 podman[272600]: 2026-01-23 10:29:42.456900169 +0000 UTC m=+0.121014174 container init 6632b9c6f995b81e374eb91785f22f9a2ff1c75f72a15b6be00b0738a5fd9906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:29:42 compute-0 podman[272600]: 2026-01-23 10:29:42.46288587 +0000 UTC m=+0.126999825 container start 6632b9c6f995b81e374eb91785f22f9a2ff1c75f72a15b6be00b0738a5fd9906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:29:42 compute-0 podman[272600]: 2026-01-23 10:29:42.465793643 +0000 UTC m=+0.129907598 container attach 6632b9c6f995b81e374eb91785f22f9a2ff1c75f72a15b6be00b0738a5fd9906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:29:42 compute-0 systemd[1]: libpod-6632b9c6f995b81e374eb91785f22f9a2ff1c75f72a15b6be00b0738a5fd9906.scope: Deactivated successfully.
Jan 23 10:29:42 compute-0 priceless_germain[272617]: 167 167
Jan 23 10:29:42 compute-0 conmon[272617]: conmon 6632b9c6f995b81e374e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6632b9c6f995b81e374eb91785f22f9a2ff1c75f72a15b6be00b0738a5fd9906.scope/container/memory.events
Jan 23 10:29:42 compute-0 podman[272600]: 2026-01-23 10:29:42.469374005 +0000 UTC m=+0.133487970 container died 6632b9c6f995b81e374eb91785f22f9a2ff1c75f72a15b6be00b0738a5fd9906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f897ad8241a7d2dba6b5676f773e5e3b006caeeea5d1ec6ce87df044aa93eabf-merged.mount: Deactivated successfully.
Jan 23 10:29:42 compute-0 podman[272600]: 2026-01-23 10:29:42.502156471 +0000 UTC m=+0.166270416 container remove 6632b9c6f995b81e374eb91785f22f9a2ff1c75f72a15b6be00b0738a5fd9906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:29:42 compute-0 systemd[1]: libpod-conmon-6632b9c6f995b81e374eb91785f22f9a2ff1c75f72a15b6be00b0738a5fd9906.scope: Deactivated successfully.
Jan 23 10:29:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:42.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:42 compute-0 podman[272644]: 2026-01-23 10:29:42.699152173 +0000 UTC m=+0.063510283 container create 603fe9fa87fba3eec7cc5b65856fec0afbf1e4a0a3053e089a43ee720ced7efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mestorf, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 23 10:29:42 compute-0 systemd[1]: Started libpod-conmon-603fe9fa87fba3eec7cc5b65856fec0afbf1e4a0a3053e089a43ee720ced7efd.scope.
Jan 23 10:29:42 compute-0 podman[272644]: 2026-01-23 10:29:42.672781721 +0000 UTC m=+0.037139881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:29:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42c5c935e8348125c963e19a0d6c2332c8cd70e43d87d979c79505d957630855/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42c5c935e8348125c963e19a0d6c2332c8cd70e43d87d979c79505d957630855/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42c5c935e8348125c963e19a0d6c2332c8cd70e43d87d979c79505d957630855/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42c5c935e8348125c963e19a0d6c2332c8cd70e43d87d979c79505d957630855/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:29:42 compute-0 podman[272644]: 2026-01-23 10:29:42.794160275 +0000 UTC m=+0.158518405 container init 603fe9fa87fba3eec7cc5b65856fec0afbf1e4a0a3053e089a43ee720ced7efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mestorf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 10:29:42 compute-0 podman[272644]: 2026-01-23 10:29:42.806164937 +0000 UTC m=+0.170523067 container start 603fe9fa87fba3eec7cc5b65856fec0afbf1e4a0a3053e089a43ee720ced7efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 23 10:29:42 compute-0 podman[272644]: 2026-01-23 10:29:42.809797511 +0000 UTC m=+0.174155641 container attach 603fe9fa87fba3eec7cc5b65856fec0afbf1e4a0a3053e089a43ee720ced7efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 23 10:29:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 583 B/s rd, 0 op/s
Jan 23 10:29:43 compute-0 nova_compute[249229]: 2026-01-23 10:29:43.269 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:43 compute-0 lvm[272734]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:29:43 compute-0 lvm[272734]: VG ceph_vg0 finished
Jan 23 10:29:43 compute-0 charming_mestorf[272660]: {}
Jan 23 10:29:43 compute-0 systemd[1]: libpod-603fe9fa87fba3eec7cc5b65856fec0afbf1e4a0a3053e089a43ee720ced7efd.scope: Deactivated successfully.
Jan 23 10:29:43 compute-0 podman[272644]: 2026-01-23 10:29:43.521897553 +0000 UTC m=+0.886255683 container died 603fe9fa87fba3eec7cc5b65856fec0afbf1e4a0a3053e089a43ee720ced7efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:29:43 compute-0 systemd[1]: libpod-603fe9fa87fba3eec7cc5b65856fec0afbf1e4a0a3053e089a43ee720ced7efd.scope: Consumed 1.129s CPU time.
Jan 23 10:29:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-42c5c935e8348125c963e19a0d6c2332c8cd70e43d87d979c79505d957630855-merged.mount: Deactivated successfully.
Jan 23 10:29:43 compute-0 podman[272644]: 2026-01-23 10:29:43.560043252 +0000 UTC m=+0.924401362 container remove 603fe9fa87fba3eec7cc5b65856fec0afbf1e4a0a3053e089a43ee720ced7efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mestorf, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:29:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:43.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:43 compute-0 systemd[1]: libpod-conmon-603fe9fa87fba3eec7cc5b65856fec0afbf1e4a0a3053e089a43ee720ced7efd.scope: Deactivated successfully.
Jan 23 10:29:43 compute-0 sudo[272535]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:29:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:43.721Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:43 compute-0 ceph-mon[74335]: pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 583 B/s rd, 0 op/s
Jan 23 10:29:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:29:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:44.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:44 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:44 compute-0 sudo[272750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:29:44 compute-0 sudo[272750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:44 compute-0 sudo[272750]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 583 B/s rd, 0 op/s
Jan 23 10:29:45 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:45 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:29:45 compute-0 ceph-mon[74335]: pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 583 B/s rd, 0 op/s
Jan 23 10:29:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:29:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:45.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:29:46 compute-0 nova_compute[249229]: 2026-01-23 10:29:46.175 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:46.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 583 B/s rd, 0 op/s
Jan 23 10:29:47 compute-0 sudo[272777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:29:47 compute-0 sudo[272777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:29:47 compute-0 sudo[272777]: pam_unix(sudo:session): session closed for user root
Jan 23 10:29:47 compute-0 ceph-mon[74335]: pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 583 B/s rd, 0 op/s
Jan 23 10:29:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:47.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:47.831Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:48 compute-0 nova_compute[249229]: 2026-01-23 10:29:48.271 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:48.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2022542434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:29:48 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2022542434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:29:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:48.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:29:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 583 B/s rd, 0 op/s
Jan 23 10:29:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:49.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:49 compute-0 ceph-mon[74335]: pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 583 B/s rd, 0 op/s
Jan 23 10:29:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:49] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:29:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:49] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:29:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:29:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:29:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:29:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:29:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:29:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:29:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:29:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:29:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:50.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:29:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:51 compute-0 nova_compute[249229]: 2026-01-23 10:29:51.179 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:51.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:51 compute-0 ceph-mon[74335]: pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:51 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:52.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:53 compute-0 nova_compute[249229]: 2026-01-23 10:29:53.273 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:53.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:53.723Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:29:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:53.723Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:29:54 compute-0 ceph-mon[74335]: pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:29:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:54.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:29:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:55.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:56 compute-0 nova_compute[249229]: 2026-01-23 10:29:56.183 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:56 compute-0 ceph-mon[74335]: pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:56.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:56 compute-0 sshd-session[272812]: Accepted publickey for zuul from 192.168.122.10 port 59636 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 10:29:56 compute-0 systemd-logind[784]: New session 56 of user zuul.
Jan 23 10:29:56 compute-0 systemd[1]: Started Session 56 of User zuul.
Jan 23 10:29:56 compute-0 sshd-session[272812]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:29:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:29:57 compute-0 sudo[272816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 23 10:29:57 compute-0 sudo[272816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:29:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:57.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:57.834Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:29:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:57.833Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:29:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:57.837Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:29:58 compute-0 ceph-mon[74335]: pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:29:58 compute-0 nova_compute[249229]: 2026-01-23 10:29:58.275 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:29:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:29:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:58.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:29:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:58.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:29:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:29:58.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:29:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:29:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:29:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:59.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:29:59 compute-0 ceph-mon[74335]: pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:29:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:29:59.784 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:29:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:29:59.785 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:29:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:29:59.785 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:29:59 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25769 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:29:59 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16287 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:29:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:59] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:29:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:29:59] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:30:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore; 2 failed cephadm daemon(s)
Jan 23 10:30:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:30:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Jan 23 10:30:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] :      osd.2 observed slow operation indications in BlueStore
Jan 23 10:30:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Jan 23 10:30:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.fenqiu on compute-0 is in error state
Jan 23 10:30:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.1.0.compute-2.tykohi on compute-2 is in error state
Jan 23 10:30:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25807 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25781 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16296 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:00.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25819 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:00 compute-0 ceph-mon[74335]: from='client.25769 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:00 compute-0 ceph-mon[74335]: from='client.16287 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:00 compute-0 ceph-mon[74335]: Health detail: HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore; 2 failed cephadm daemon(s)
Jan 23 10:30:00 compute-0 ceph-mon[74335]: [WRN] BLUESTORE_SLOW_OP_ALERT: 2 OSD(s) experiencing slow operations in BlueStore
Jan 23 10:30:00 compute-0 ceph-mon[74335]:      osd.1 observed slow operation indications in BlueStore
Jan 23 10:30:00 compute-0 ceph-mon[74335]:      osd.2 observed slow operation indications in BlueStore
Jan 23 10:30:00 compute-0 ceph-mon[74335]: [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Jan 23 10:30:00 compute-0 ceph-mon[74335]:     daemon nfs.cephfs.2.0.compute-0.fenqiu on compute-0 is in error state
Jan 23 10:30:00 compute-0 ceph-mon[74335]:     daemon nfs.cephfs.1.0.compute-2.tykohi on compute-2 is in error state
Jan 23 10:30:00 compute-0 ceph-mon[74335]: from='client.25807 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 23 10:30:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2737233307' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:30:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:01 compute-0 nova_compute[249229]: 2026-01-23 10:30:01.186 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:01 compute-0 podman[273067]: 2026-01-23 10:30:01.564501935 +0000 UTC m=+0.092255224 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 10:30:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:01.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:01 compute-0 ceph-mon[74335]: from='client.25781 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:01 compute-0 ceph-mon[74335]: from='client.16296 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:01 compute-0 ceph-mon[74335]: from='client.25819 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2737233307' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:30:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1970715676' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:30:01 compute-0 ceph-mon[74335]: pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2989331378' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:30:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:02.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:03 compute-0 nova_compute[249229]: 2026-01-23 10:30:03.277 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:03.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:03 compute-0 ceph-mon[74335]: pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:03.724Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:04.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:05 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 23 10:30:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:30:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:30:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:05.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:30:06 compute-0 ovs-vsctl[273211]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 23 10:30:06 compute-0 nova_compute[249229]: 2026-01-23 10:30:06.188 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:06.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:06 compute-0 ceph-mon[74335]: pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:07 compute-0 virtqemud[248554]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 23 10:30:07 compute-0 virtqemud[248554]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 23 10:30:07 compute-0 virtqemud[248554]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 23 10:30:07 compute-0 sudo[273363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:30:07 compute-0 sudo[273363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:07 compute-0 sudo[273363]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:07.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:07 compute-0 nova_compute[249229]: 2026-01-23 10:30:07.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:30:07 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: cache status {prefix=cache status} (starting...)
Jan 23 10:30:07 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:07.838Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:30:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:07.839Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:30:07 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: client ls {prefix=client ls} (starting...)
Jan 23 10:30:07 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:08 compute-0 lvm[273595]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:30:08 compute-0 lvm[273595]: VG ceph_vg0 finished
Jan 23 10:30:08 compute-0 ceph-mon[74335]: pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:08 compute-0 nova_compute[249229]: 2026-01-23 10:30:08.279 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16308 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:08 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: damage ls {prefix=damage ls} (starting...)
Jan 23 10:30:08 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:08.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25793 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 23 10:30:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265376060' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:30:08 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump loads {prefix=dump loads} (starting...)
Jan 23 10:30:08 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:08.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:08 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 23 10:30:08 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 23 10:30:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:30:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16320 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25808 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:09 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2265376060' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:30:09 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3709761161' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:30:09 compute-0 ceph-mon[74335]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:30:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:09 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 23 10:30:09 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:09 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 23 10:30:09 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:30:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/858010683' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:30:09 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 23 10:30:09 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16332 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25823 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25843 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:09.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Jan 23 10:30:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3912976921' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 23 10:30:09 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 23 10:30:09 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:09 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 23 10:30:09 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16347 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25841 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:09] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:30:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:09] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:30:10 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25855 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: ops {prefix=ops} (starting...)
Jan 23 10:30:10 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 23 10:30:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513163500' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 23 10:30:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 23 10:30:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1236970046' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25867 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:10.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:10 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25862 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16383 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 23 10:30:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085471352' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: session ls {prefix=session ls} (starting...)
Jan 23 10:30:10 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:30:10 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25882 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mon[74335]: from='client.16308 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mon[74335]: from='client.25793 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mon[74335]: from='client.16320 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mon[74335]: from='client.25808 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mon[74335]: pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/858010683' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2096205967' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3912976921' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 23 10:30:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2048205722' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 23 10:30:11 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: status {prefix=status} (starting...)
Jan 23 10:30:11 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25883 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:11 compute-0 nova_compute[249229]: 2026-01-23 10:30:11.212 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 23 10:30:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:30:11 compute-0 podman[274013]: 2026-01-23 10:30:11.550596112 +0000 UTC m=+0.065544882 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 10:30:11 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16395 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:11.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 23 10:30:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621193792' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 23 10:30:12 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427895171' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 23 10:30:12 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2342214168' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25937 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T10:30:12.377+0000 7f28655d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:30:12 compute-0 ceph-mgr[74633]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:30:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:12.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.16332 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.25823 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.25843 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.16347 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.25841 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.25855 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1500381436' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2632100089' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1513163500' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1236970046' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/604163341' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.25867 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.25862 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.16383 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/824697158' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1085471352' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.25882 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.25883 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/447049587' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3358390859' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/661403047' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/621193792' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2288603447' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3873136511' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:30:12 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2183856151' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:30:12 compute-0 nova_compute[249229]: 2026-01-23 10:30:12.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:30:12 compute-0 nova_compute[249229]: 2026-01-23 10:30:12.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:30:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.281 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 10:30:13 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1820403701' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 23 10:30:13 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1489607522' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25927 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:13.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.16395 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2326913399' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1159080727' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2427895171' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2342214168' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2631441634' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.25937 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/728729314' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3971325804' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/591404194' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1820403701' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1489607522' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1309114209' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2510791750' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2372642057' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 23 10:30:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:13.725Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:13 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16452 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T10:30:13.737+0000 7f28655d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:30:13 compute-0 ceph-mgr[74633]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:30:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 23 10:30:13 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261526027' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.781 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.782 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.782 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.818 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.819 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.819 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.820 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:30:13 compute-0 nova_compute[249229]: 2026-01-23 10:30:13.821 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:30:13 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25936 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 23 10:30:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156927190' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 23 10:30:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699757616' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25985 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:30:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4136571834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:30:14 compute-0 nova_compute[249229]: 2026-01-23 10:30:14.348 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:30:14 compute-0 nova_compute[249229]: 2026-01-23 10:30:14.517 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:30:14 compute-0 nova_compute[249229]: 2026-01-23 10:30:14.519 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4377MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:30:14 compute-0 nova_compute[249229]: 2026-01-23 10:30:14.519 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:30:14 compute-0 nova_compute[249229]: 2026-01-23 10:30:14.520 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:30:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 23 10:30:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:30:14 compute-0 nova_compute[249229]: 2026-01-23 10:30:14.633 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:30:14 compute-0 nova_compute[249229]: 2026-01-23 10:30:14.633 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:30:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:14.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:14 compute-0 nova_compute[249229]: 2026-01-23 10:30:14.649 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:30:14 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16494 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26003 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 23 10:30:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571602394' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:12.597594+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 319488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:13.597830+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 319488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927091 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:14.598100+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 311296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:15.598341+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 311296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:16.598703+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 303104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:17.598922+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 294912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:18.599143+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 294912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927091 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:19.599370+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 286720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:20.599704+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 294912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:21.599938+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 286720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:22.600155+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:23.600468+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927091 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:24.600708+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 278528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:25.600937+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 270336 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:26.601227+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 270336 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:27.601477+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 262144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:28.601732+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 262144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927091 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:29.602033+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 253952 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:30.602281+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 253952 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:31.602620+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 253952 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:32.602885+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 245760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:33.603144+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8839000 session 0x55c0a9440b40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 245760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927091 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:34.603316+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 245760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:35.603542+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 237568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:36.604415+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 237568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:37.604613+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 229376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:38.604775+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 229376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927091 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:39.604977+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 229376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:40.605214+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 221184 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:41.605458+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 212992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:42.605669+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 204800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:43.605910+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 204800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927091 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:44.606073+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 196608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 68.469787598s of 68.497421265s, submitted: 5
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:45.606232+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 196608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:46.606559+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 196608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:47.606731+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 188416 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:48.607101+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 188416 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927239 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:49.607299+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 188416 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:50.607441+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 180224 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:51.607632+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 180224 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:52.607790+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 172032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:53.607961+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 172032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927239 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:54.608318+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 172032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:55.608494+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.727671623s of 10.194524765s, submitted: 9
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 163840 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:56.609098+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 163840 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:57.609260+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 163840 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:58.609754+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 139264 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926041 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:57:59.609945+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 139264 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:00.610106+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 131072 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:01.610340+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 122880 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:02.610626+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 122880 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:03.610928+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 114688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925909 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:04.611078+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 114688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:05.611308+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 114688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:06.612087+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 98304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:07.612439+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 98304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:08.612675+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 90112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925909 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:09.613051+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 90112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:10.613254+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 81920 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:11.613673+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 81920 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:12.613855+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 81920 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:13.614014+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 73728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925909 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:14.614169+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 73728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:15.614334+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 73728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:16.614610+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:17.614884+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:18.615073+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 57344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925909 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:19.615303+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 57344 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:20.615576+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 49152 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:21.615805+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 49152 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:22.615992+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 49152 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8838800 session 0x55c0a88732c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:23.616209+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 40960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925909 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:24.616381+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 40960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:25.616602+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 32768 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:26.616894+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 24576 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:27.617150+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 16384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:28.617459+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 16384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925909 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:29.617645+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 16384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:30.617855+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 8192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:31.618025+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 0 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:32.618220+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:33.618448+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b35400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.165973663s of 38.297725677s, submitted: 3
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926041 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:34.618715+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:35.618880+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:36.619257+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:37.619457+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:38.619670+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926057 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:39.619893+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:40.620076+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:41.620270+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:42.620480+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:43.620628+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:44.621241+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926057 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:45.621452+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:46.621697+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:47.621873+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:48.622037+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:49.622192+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926057 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.563409805s of 16.610542297s, submitted: 10
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:50.622397+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:51.622593+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:52.622831+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 983040 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:53.623083+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:54.623259+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925909 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:55.623476+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:56.623728+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:57.623909+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 966656 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:58.624124+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:58:59.624293+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925909 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:00.624446+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:01.624698+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:02.624922+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8b35400 session 0x55c0a94c2000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:03.625453+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:04.625672+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925909 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:05.625896+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 933888 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:06.626178+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:07.626539+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:08.626788+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:09.627036+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925909 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:10.627270+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:11.627552+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 7205 writes, 30K keys, 7205 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 7205 writes, 1228 syncs, 5.87 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7205 writes, 30K keys, 7205 commit groups, 1.0 writes per commit group, ingest: 20.49 MB, 0.03 MB/s
                                           Interval WAL: 7205 writes, 1228 syncs, 5.87 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:12.627710+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 843776 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:13.627848+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:14.627999+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925909 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 835584 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6cbe400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.530122757s of 24.689783096s, submitted: 1
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:15.628172+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 819200 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:16.628414+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:17.628607+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:18.628796+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 794624 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:19.629004+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927569 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 794624 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:20.629252+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8ad3800 session 0x55c0a87010e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 794624 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:21.629449+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:22.629645+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:23.629893+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:24.630137+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927569 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 761856 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:25.630333+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 753664 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:26.630637+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 753664 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:27.630854+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 745472 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:28.631049+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.500973701s of 13.690242767s, submitted: 9
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 737280 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:29.631258+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927269 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 729088 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:30.631461+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 729088 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:31.632061+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 720896 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:32.632215+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 729088 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:33.632620+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 729088 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:34.632831+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927553 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 720896 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:35.633032+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 720896 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:36.633290+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 720896 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:37.633511+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 712704 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:38.633769+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.018385887s of 10.066536903s, submitted: 9
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 712704 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:39.634033+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929081 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 712704 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:40.634261+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 704512 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:41.634478+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 696320 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:42.634672+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:43.634918+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:44.635126+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928474 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 679936 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:45.635462+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:46.635798+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 688128 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:47.636001+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 679936 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:48.636274+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 679936 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:49.636461+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 679936 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:50.636710+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:51.637520+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 671744 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:52.637704+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 663552 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:53.637924+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 663552 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:54.638135+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:55.638702+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 655360 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:56.639021+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:57.639183+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:58.639445+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 647168 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T09:59:59.639633+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 638976 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:00.639842+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 638976 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:01.639980+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 638976 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:02.640137+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 630784 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:03.640300+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 630784 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:04.640551+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 622592 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:05.641011+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 622592 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:06.641216+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 622592 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:07.641443+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 614400 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:08.641591+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 614400 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:09.641819+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 614400 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:10.641951+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 606208 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:11.642136+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 606208 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:12.642287+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 598016 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:13.642527+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 598016 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:14.642682+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 598016 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:15.642840+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 589824 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:16.643018+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 589824 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:17.643192+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:18.643361+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 581632 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:19.643571+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 581632 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:20.643730+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 581632 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:21.643884+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 573440 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:22.644224+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 573440 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:23.644415+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 565248 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:24.644619+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 565248 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:25.644796+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 557056 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:26.645006+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 557056 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:27.645209+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:28.645417+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:29.645565+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:30.645759+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 548864 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:31.646136+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 540672 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:32.646319+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 532480 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:33.646536+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 532480 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:34.646714+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 532480 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:35.646931+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 524288 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:36.647218+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 524288 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:37.647447+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 524288 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:38.647637+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 516096 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:39.647780+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 516096 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:40.647995+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 507904 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:41.648158+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 507904 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:42.648380+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 499712 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:43.648586+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 499712 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:44.648727+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 499712 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:45.648899+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 491520 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:46.649178+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 491520 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:47.649449+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 491520 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000087s
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:48.649620+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 483328 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:49.649831+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8838800 session 0x55c0a78cd860
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 483328 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:50.650028+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 475136 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:51.650335+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 475136 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:52.650512+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 466944 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:53.650685+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 466944 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:54.650850+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 458752 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:55.650990+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 450560 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:56.651238+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 450560 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:57.651428+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 450560 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:58.651586+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 442368 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:00:59.651783+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 442368 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928342 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:00.651920+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 81.599395752s of 81.615783691s, submitted: 4
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 442368 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:01.652086+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 434176 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:02.652268+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 434176 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:03.652439+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 425984 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:04.652729+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 393216 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930002 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:05.652898+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 393216 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:06.653149+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 385024 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:07.653414+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 385024 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:08.653724+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 376832 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:09.653918+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 368640 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929834 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:10.654103+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.769923687s of 10.019312859s, submitted: 10
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 360448 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:11.654311+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 352256 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:12.654529+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 352256 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:13.654748+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:14.654959+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929395 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:15.655205+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:16.655620+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:17.655818+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:18.656078+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:19.656267+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929263 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:20.656418+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:21.656628+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:22.656799+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:23.656970+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:24.657158+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929263 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:25.657309+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:26.657605+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a6cbe400 session 0x55c0a94c2d20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:27.657794+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:28.658042+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:29.658230+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 344064 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929263 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:30.658411+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 335872 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:31.658577+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 335872 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:32.658723+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 335872 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:33.658941+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 335872 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:34.659121+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 335872 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929263 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:35.659278+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 335872 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:36.659509+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 335872 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:37.659813+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 327680 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.352416992s of 27.501325607s, submitted: 3
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:38.659982+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 327680 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:39.660173+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 327680 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929395 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:40.660380+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 327680 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:41.660629+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 311296 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:42.660835+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 311296 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:43.661237+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 311296 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:44.661448+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 303104 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930923 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:45.661596+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 294912 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:46.661886+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 294912 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:47.662103+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:48.662471+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:49.662761+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930164 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:50.662963+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:51.663211+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:52.663420+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:53.663596+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:54.663842+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.339046478s of 17.007486343s, submitted: 11
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930184 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:55.664041+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:56.664647+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:57.664865+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:58.665090+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:01:59.665268+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930184 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:00.665436+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:01.665656+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:02.665855+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:03.666001+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8839000 session 0x55c0a8873a40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [1])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 278528 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:04.666164+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.609205246s of 10.012327194s, submitted: 71
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 204800 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930400 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:05.666367+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1130496 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:06.666613+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 1081344 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:07.666814+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 983040 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:08.666961+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 983040 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:09.667076+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 983040 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930184 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:10.667233+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 966656 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:11.667463+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 966656 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:12.667705+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 958464 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:13.667847+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:14.668411+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b34000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.264580250s of 10.046097755s, submitted: 141
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930388 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:15.668588+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:16.669463+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:17.669659+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:18.669839+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:19.670013+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933188 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:20.670210+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:21.670455+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:22.670724+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:23.670990+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:24.671199+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1,1])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.867183685s of 10.304840088s, submitted: 13
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:25.671324+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933356 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:26.671560+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:27.671840+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:28.672036+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:29.672221+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:30.672467+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:31.672645+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 917504 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:32.672824+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:33.673007+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:34.673205+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:35.673444+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:36.673639+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:37.673832+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:38.674111+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:39.674276+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:40.674459+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:41.674671+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:42.674885+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:43.675101+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:44.675266+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:45.675427+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:46.675639+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:47.675833+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:48.676004+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:49.676206+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:50.676343+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:51.676497+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:52.676640+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:53.676800+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:54.676949+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:55.677097+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:56.677290+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:57.677421+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:58.677569+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:02:59.677775+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:00.677940+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:01.678160+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:02.678392+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:03.678571+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:04.678723+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:05.678887+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:06.679142+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:07.679332+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:08.679562+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:09.679707+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:10.679863+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:11.680031+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:12.680174+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:13.680387+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:14.680563+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:15.680733+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:16.680933+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:17.681105+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:18.681370+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8714800 session 0x55c0a9440960
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:19.681560+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:20.681738+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:21.682407+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:22.682690+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:23.683251+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a8873e00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:24.683625+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:25.684518+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932617 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:26.684950+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:27.685131+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:28.685592+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:29.685802+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6cbe400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 64.151512146s of 64.354057312s, submitted: 2
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:30.686022+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932749 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:31.686183+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:32.686471+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:33.686663+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:34.686913+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:35.687316+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932897 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:36.687751+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:37.688079+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:38.688485+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 819200 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:39.688788+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 819200 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:40.689067+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 819200 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932897 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:41.689326+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 819200 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.822875023s of 12.261339188s, submitted: 10
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:42.689491+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 819200 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:43.689771+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 819200 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:44.689961+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 819200 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:45.690216+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932158 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:46.690488+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:47.690707+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 942080 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:48.690916+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:49.691144+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 925696 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:50.691329+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 909312 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:51.691538+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:52.691702+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:53.691907+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:54.692081+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:55.692260+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:56.692410+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:57.692537+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:58.692693+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:03:59.692818+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:00.693011+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:01.693194+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:02.693485+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:03.693592+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:04.693749+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:05.695094+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:06.695299+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:07.695427+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:08.695729+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:09.695893+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:10.696029+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:11.696231+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:12.696404+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:13.696514+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:14.696650+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:15.696904+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:16.697097+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:17.697266+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:18.697416+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:19.697593+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:20.697762+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:21.697959+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:22.698278+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:23.698456+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:24.698622+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:25.698789+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:26.698965+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:27.699137+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:28.699488+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:29.699660+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:30.699824+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:31.699968+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:32.700107+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:33.700304+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:34.700611+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:35.700762+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:36.700992+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:37.701139+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:38.701296+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:39.701469+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:40.701615+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:41.701759+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:42.701897+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a6cbe400 session 0x55c0a6898000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:43.702032+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:44.705938+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:45.706087+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:46.706404+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:47.706569+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:48.706764+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:49.707072+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:50.707383+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932026 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:51.707549+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:52.707844+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:53.708024+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 71.532615662s of 71.780799866s, submitted: 10
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:54.708173+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:55.708444+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932158 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 819200 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:56.708653+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 811008 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:57.708823+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 811008 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:58.709023+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 811008 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:04:59.709224+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:00.709469+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933686 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:01.709649+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:02.709823+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:03.710213+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:04.710387+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [1])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 802816 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:05.711479+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933686 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 786432 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:06.711767+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 786432 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:07.711911+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 786432 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:08.712079+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 786432 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:09.712234+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.876880646s of 15.956839561s, submitted: 10
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:10.712427+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933386 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:11.712716+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:12.712867+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:13.713053+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:14.713192+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:15.713503+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933538 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:16.713684+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:17.713828+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:18.713986+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:19.714160+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:20.714370+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933538 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:21.714526+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:22.714728+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:23.714895+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:24.715044+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:25.715206+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933538 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:26.715415+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:27.715598+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8b34000 session 0x55c0a9974f00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:28.716856+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:29.718591+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:30.718734+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933538 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:31.719174+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:32.719471+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 761856 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:33.719617+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 761856 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:34.719726+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:35.720049+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 761856 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933538 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:36.720398+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 761856 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:37.720529+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 761856 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:38.720742+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 761856 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.866167068s of 28.871553421s, submitted: 1
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:39.721038+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 745472 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:40.721207+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 745472 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933670 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:41.721443+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 745472 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:42.721572+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 778240 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:43.721704+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 778240 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:44.721930+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 778240 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:45.722553+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 778240 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935198 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:46.723161+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:47.723420+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:48.723562+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:49.723710+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:50.724049+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 770048 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:51.726985+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934591 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:52.727114+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:53.727305+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.879275322s of 15.913887978s, submitted: 11
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:54.727425+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:55.727684+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:56.727917+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934459 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:57.728059+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:58.728233+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:05:59.728467+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:00.728671+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:01.728798+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934459 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:02.728925+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:03.729119+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:04.729277+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:05.729424+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:06.729632+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934459 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:07.729890+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:08.730072+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:09.730249+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:10.730424+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:11.730708+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934459 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:12.733447+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:13.733603+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:14.733803+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8714800 session 0x55c0a94db680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:15.733948+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:16.734104+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934459 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:17.734203+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8838800 session 0x55c0a9441a40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:18.734363+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:19.734514+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:20.734680+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:21.734835+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934459 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:22.735163+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:23.735303+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:24.735424+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:25.735617+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.549528122s of 31.552978516s, submitted: 1
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:26.735955+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934591 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:27.736176+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 901120 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:28.736312+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6cbe400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 892928 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:29.736439+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:30.736603+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 876544 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:31.736808+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936251 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 868352 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:32.736943+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 860160 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:33.737091+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 860160 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:34.737292+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 860160 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:35.737454+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:36.737704+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937763 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 851968 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:37.737835+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.531599998s of 11.632777214s, submitted: 13
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:38.737973+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:39.738127+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:40.738321+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:41.738480+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937024 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:42.738627+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:43.739252+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:44.739616+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:45.742226+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:46.742502+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936892 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:47.742657+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:48.742929+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:49.743064+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:50.743209+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:51.743449+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936892 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:52.743563+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:53.743696+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:54.743861+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:55.744013+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:56.744241+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936892 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:57.744442+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:58.744593+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:06:59.744750+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:00.744879+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:01.745017+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936892 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:02.745432+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:03.745595+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:04.745756+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:05.745967+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:06.746203+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936892 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:07.746462+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:08.746627+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:09.746789+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:10.746941+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:11.747150+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936892 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:12.747423+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:13.747554+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:14.747676+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:15.747827+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:16.748049+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936892 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:17.748187+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:18.748320+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:19.748428+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:20.748601+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8838800 session 0x55c0a9519a40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:21.748733+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936892 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:22.748906+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:23.749018+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:24.749185+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:25.749338+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:26.749548+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936892 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:27.749709+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:28.749890+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:29.750049+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:30.750292+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:31.750428+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936892 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.480346680s of 54.496955872s, submitted: 4
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:32.750619+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:33.750770+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:34.750936+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 843776 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:35.751087+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:36.751343+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937040 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:37.751517+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 835584 heap: 82796544 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:38.751683+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:39.751841+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:40.752057+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:41.752300+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936449 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.133461952s of 10.089128494s, submitted: 10
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:42.752409+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:43.752567+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:44.752707+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:45.752880+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:46.753088+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:47.753267+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:48.753474+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:49.753623+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:50.753877+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:51.754039+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:52.754255+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:53.754494+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:54.754673+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:55.754782+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:56.754971+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:57.755106+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:58.755240+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:59.755418+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:00.755546+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:01.755672+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:02.755878+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:03.756058+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:04.756252+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:05.756421+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:06.756960+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:07.757084+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:08.757202+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:09.757318+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:10.757416+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:11.757538+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:12.757671+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:13.757789+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:14.757948+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:15.758160+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:16.758394+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:17.758511+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:18.758678+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:19.758840+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:20.758982+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:21.759112+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:22.759262+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:23.759410+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8839000 session 0x55c0a9517c20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:24.759536+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:25.759657+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:26.759836+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:27.759989+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:28.760189+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:29.760325+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:30.760459+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:31.760600+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:32.760757+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:33.760889+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:34.761006+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:35.761137+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.339496613s of 53.345134735s, submitted: 2
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:36.761411+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935842 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:37.761527+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:38.761667+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:39.761839+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:40.762044+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:41.762205+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935858 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:42.762492+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:43.762642+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:44.762848+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:45.762981+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:46.763589+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935858 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:47.763744+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:48.763879+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.945819855s of 12.972918510s, submitted: 9
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:49.764258+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:50.764425+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:51.764550+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935558 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:52.764665+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:53.764835+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:54.765117+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:55.765322+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:56.765563+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:57.765728+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:58.765915+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:59.766058+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:00.766208+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:01.766409+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:02.766609+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:03.766782+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:04.766928+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:05.767162+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:06.767406+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:07.767560+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:08.767697+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:09.767853+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 1966080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:10.768028+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 1966080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:11.768178+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 7882 writes, 31K keys, 7882 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7882 writes, 1550 syncs, 5.09 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 677 writes, 1212 keys, 677 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
                                           Interval WAL: 677 writes, 322 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:12.768326+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:13.768528+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:14.768674+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:15.768806+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:16.768991+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:17.769162+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:18.769419+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:19.769646+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:20.769973+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:21.770115+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:22.770321+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:23.770526+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:24.770669+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:25.770829+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:26.771020+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:27.771169+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:28.771425+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:29.771676+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:30.771927+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:31.772203+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:32.772387+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:33.772570+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:34.772709+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:35.772837+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:36.773014+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:37.773179+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:38.773446+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:39.773643+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:40.773826+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:41.773961+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:42.774127+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:43.774366+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:44.774559+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:45.774696+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:46.774883+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:47.775015+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:48.775144+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:49.775279+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:50.775410+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:51.775554+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:52.775679+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:53.775846+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:54.775964+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:55.776114+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:56.776337+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:57.776567+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:58.779215+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:59.779368+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:00.779516+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:01.779694+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:02.779904+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:03.780046+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:04.780264+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:05.780512+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:06.780699+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:07.780846+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:08.781058+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:09.781272+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8714800 session 0x55c0a87010e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:10.781407+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:11.781585+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:12.781769+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:13.781896+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:14.782015+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:15.782179+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:16.782397+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:17.782522+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:18.782643+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:19.782783+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:20.782931+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:21.783052+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:22.783223+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:23.783370+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b34000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 95.271827698s of 95.274726868s, submitted: 1
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:24.783533+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:25.783694+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:26.783902+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:27.784029+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:28.784195+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:29.784332+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:30.784631+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:31.784789+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:32.785010+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:33.785209+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:34.785466+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:35.785640+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:36.785846+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:37.786051+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:38.786237+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:39.786433+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:40.786551+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:41.786677+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:42.786825+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:43.787030+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:44.787199+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:45.787405+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:46.787699+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:47.787846+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:48.788054+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:49.788212+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:50.788382+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:51.788605+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:52.788787+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:53.788966+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:54.789133+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:55.789265+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:56.789514+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:57.790491+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:58.790781+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:59.790919+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:00.791059+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:01.791190+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:02.791338+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:03.791458+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:04.791628+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:05.791729+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:06.791884+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:07.792005+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:08.792143+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:09.792261+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:10.792396+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:11.792501+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:12.792632+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:13.792749+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:14.792878+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:15.793032+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:16.793203+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:17.793320+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:18.793411+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:19.793526+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:20.793634+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:21.793821+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:22.794081+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:23.794207+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:24.794337+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:25.794530+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:26.794735+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:27.794906+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:28.795031+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:29.795176+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:30.795340+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:31.795575+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:32.795759+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:33.795927+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:34.796072+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:35.796212+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:36.796405+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:37.796526+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:38.796682+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:39.796851+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:40.797041+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:41.797190+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:42.797414+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:43.797556+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:44.797725+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:45.797893+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:46.798115+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:47.798280+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:48.798495+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:49.798659+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:50.798816+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:51.798971+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:52.799132+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a88732c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:53.799280+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a6cbe400 session 0x55c0a78ad2c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:54.799431+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:55.799621+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:56.799849+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:57.799962+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:58.800100+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:59.800289+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:00.800426+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:01.800565+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:02.800712+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6cbe400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.687911987s of 99.724082947s, submitted: 12
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:03.800879+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:04.801005+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,1])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:05.801304+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:06.801558+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 1384448 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:07.801787+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937502 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 327680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:08.801942+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 327680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:09.802077+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 1376256 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:10.802250+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 1359872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:11.802450+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1335296 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:12.802658+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937486 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:13.802883+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:14.803064+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:15.803188+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.640565872s of 12.511781693s, submitted: 245
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:16.803462+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:17.803625+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939014 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:18.803793+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:19.803984+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:20.804140+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:21.804292+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:22.804560+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:23.804721+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 1236992 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:24.804908+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:25.805125+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:26.805433+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:27.805593+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:28.805774+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:29.805962+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:30.806192+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:31.806315+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:32.806538+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:33.806693+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:34.806877+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:35.807090+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:36.807336+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:37.807568+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:38.807787+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:39.807949+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:40.808398+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:41.808599+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:42.808825+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8b34000 session 0x55c0a9519680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:43.808997+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:44.809155+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:45.809389+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:46.809757+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:47.810032+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:48.810239+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:49.810412+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:50.810599+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:51.810753+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:52.810986+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:53.811141+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.789596558s of 37.869590759s, submitted: 10
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:54.811332+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:55.811541+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:56.811855+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:57.812078+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939803 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:58.812213+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:59.812414+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:00.812521+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:01.812667+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:02.812892+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939635 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:03.813069+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.943451881s of 10.137590408s, submitted: 10
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:04.813502+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:05.813660+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:06.813856+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:07.813989+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939196 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:08.814112+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:09.814306+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:10.814433+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:11.814604+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:12.814745+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:13.814849+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:14.815048+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:15.815203+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:16.815437+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:17.815580+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:18.815784+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:19.816326+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:20.816551+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:21.816847+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:22.817033+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:23.817228+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:24.817425+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:25.817644+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:26.817885+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:27.818082+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:28.818285+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:29.818423+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:30.818580+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:31.819205+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:32.819368+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:33.820386+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:34.821113+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:35.821317+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:36.821833+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:37.822189+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:38.822577+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:39.822718+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:40.822919+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:41.823232+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:42.823573+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:43.823877+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:44.824043+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:45.824220+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:46.824476+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:47.824616+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:48.824884+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:49.825049+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:50.825213+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:51.825453+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:52.825585+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:53.825712+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:54.825896+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:55.826134+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:56.826322+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:57.826491+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:58.826649+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:59.826819+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:00.826989+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:01.827157+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:02.827329+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:03.827496+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:04.827690+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:05.827872+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:06.828046+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:07.828199+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:08.828393+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:09.828648+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:10.828803+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:11.828934+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:12.829094+0000)
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.25927 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.16452 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/261526027' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2049800932' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.25936 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/600541726' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1794080162' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4156927190' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3699757616' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4136571834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2345853878' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3596812779' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1267146393' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-mon[74335]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:13.829195+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:14.829338+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:15.829517+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:16.829698+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:17.829843+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:18.829982+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:19.830205+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:20.830382+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:21.830516+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:22.830679+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:23.830875+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:24.831038+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:25.831169+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:26.831427+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:27.831610+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:28.831737+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:29.831898+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:30.832034+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:31.832201+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:32.832337+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:33.832522+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:34.832720+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:35.832909+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:36.834712+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:37.835922+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:38.836169+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:39.836376+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:40.884013+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:41.884889+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:42.885058+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:43.885389+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:44.885569+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:45.885760+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:46.885980+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:47.886270+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:48.886436+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:49.886866+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:50.887067+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:51.887276+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:52.887408+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:53.887773+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:54.887936+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:55.888129+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:56.888409+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:57.888663+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:58.888849+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:59.889023+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:00.889177+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:01.889406+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:02.889557+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:03.889713+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:04.889891+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:05.890151+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:06.890380+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:07.890547+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 123.779830933s of 123.839828491s, submitted: 2
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:08.890699+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946520 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 81920 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:09.890890+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 8282112 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fc1f5000/0x0/0x4ffc00000, data 0x56002c/0x615000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _renew_subs
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 141 ms_handle_reset con 0x55c0a8839000 session 0x55c0a980a780
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:10.891060+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:11.891167+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:12.891454+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 142 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a7927680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:13.891665+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987291 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:14.891856+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc1ed000/0x0/0x4ffc00000, data 0x56426f/0x61d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:15.892075+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:16.892286+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:17.892501+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:18.892628+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989117 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:19.892803+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:20.892946+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:21.893104+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:22.893235+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:23.893327+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989117 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:24.893496+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:25.893768+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:26.894215+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:27.894339+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 ms_handle_reset con 0x55c0a6cbe400 session 0x55c0a78a8b40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:28.894508+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989117 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:29.894656+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:30.894795+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:31.894942+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:32.895094+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:33.895221+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989117 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:34.895420+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:35.895589+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:36.895872+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:37.896037+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 ms_handle_reset con 0x55c0a8714800 session 0x55c0a689cb40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:38.896419+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989117 data_alloc: 218103808 data_used: 131072
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6cbe400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.314317703s of 31.454385757s, submitted: 41
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:39.897610+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:40.898679+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:41.901797+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:42.902780+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:43.904843+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988425 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:44.905685+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 ms_handle_reset con 0x55c0a8838800 session 0x55c0a8cfbc20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:45.905873+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:46.906573+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:47.907575+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:48.907724+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988557 data_alloc: 218103808 data_used: 126976
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:49.908211+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a980a960
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b34000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 ms_handle_reset con 0x55c0a8b34000 session 0x55c0a9519860
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 3588096 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:50.908585+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 3588096 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:51.908807+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.712681770s of 12.734780312s, submitted: 7
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _renew_subs
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 89628672 unmapped: 3530752 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:52.908961+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8ad3800 session 0x55c0a89fad20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b34400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8b34400 session 0x55c0a7927e00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8838800 session 0x55c0a8cfb0e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8ad3800 session 0x55c0a94b70e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a8701a40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 8953856 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:53.909116+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fbd89000/0x0/0x4ffc00000, data 0x9c746d/0xa83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b34000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8b34000 session 0x55c0a7928d20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045402 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fbd89000/0x0/0x4ffc00000, data 0x9c746d/0xa83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 8953856 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:54.909438+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 8953856 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:55.909600+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c82000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8c82000 session 0x55c0a90f94a0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 8937472 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:56.909893+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8ad3800 session 0x55c0a88723c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 8937472 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:57.910102+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a8cf4b40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fbd89000/0x0/0x4ffc00000, data 0x9c746d/0xa83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fbd89000/0x0/0x4ffc00000, data 0x9c746d/0xa83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 8945664 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:58.910250+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b34000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046235 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 8937472 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:59.910434+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 4710400 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:00.910580+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:01.910753+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 4710400 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _renew_subs
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.127929688s of 10.011770248s, submitted: 43
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fbd85000/0x0/0x4ffc00000, data 0x9c943f/0xa86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:02.910909+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 4710400 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:03.911069+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 4710400 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080893 data_alloc: 234881024 data_used: 9277440
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:04.911214+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 4710400 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:05.911330+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:06.911611+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fbd86000/0x0/0x4ffc00000, data 0x9c943f/0xa86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:07.911828+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fbd86000/0x0/0x4ffc00000, data 0x9c943f/0xa86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:08.912089+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079314 data_alloc: 234881024 data_used: 9281536
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:09.912238+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:10.912432+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:11.912635+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 2785280 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa66f000/0x0/0x4ffc00000, data 0xf3a43f/0xff7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:12.913451+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.363606453s of 10.592675209s, submitted: 82
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101515264 unmapped: 1089536 heap: 102604800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:13.914280+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103063552 unmapped: 589824 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:14.914925+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 581632 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:15.915663+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 581632 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:16.916239+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 581632 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:17.916533+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:18.916769+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:19.917047+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:20.917527+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:21.917798+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:22.918152+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:23.918488+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:24.918687+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:25.918868+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:26.919021+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:27.919177+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:28.919318+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:29.919516+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:30.919649+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:31.919796+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103088128 unmapped: 565248 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:32.919957+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103088128 unmapped: 565248 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:33.920259+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103088128 unmapped: 565248 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:34.920429+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103088128 unmapped: 565248 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:35.920547+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103096320 unmapped: 557056 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:36.921130+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103096320 unmapped: 557056 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:37.921272+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103096320 unmapped: 557056 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:38.921454+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103096320 unmapped: 557056 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a980ab40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6890400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a6890400 session 0x55c0a8cfa000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:39.921606+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103104512 unmapped: 548864 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:40.921766+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5e400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.849666595s of 28.051139832s, submitted: 24
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101679104 unmapped: 1974272 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5e400 session 0x55c0a94db680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:41.921905+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6890400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 7168000 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a6890400 session 0x55c0a8cfa5a0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:42.922045+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 7168000 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:43.922204+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 7168000 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159061 data_alloc: 234881024 data_used: 10600448
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:44.922400+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f3000/0x0/0x4ffc00000, data 0x12bb4a1/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 7536640 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:45.922752+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 7536640 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:46.923189+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 7528448 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f3000/0x0/0x4ffc00000, data 0x12bb4a1/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:47.923526+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 7528448 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f3000/0x0/0x4ffc00000, data 0x12bb4a1/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:48.923829+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 7528448 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159061 data_alloc: 234881024 data_used: 10600448
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:49.924063+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 7528448 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a78a9c20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:50.924284+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 7544832 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:51.924426+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 7544832 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f3000/0x0/0x4ffc00000, data 0x12bb4a1/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:52.924961+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 7544832 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:53.925194+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 7544832 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8ad3800 session 0x55c0a89fa1e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159061 data_alloc: 234881024 data_used: 10600448
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:54.925416+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 7536640 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f3000/0x0/0x4ffc00000, data 0x12bb4a1/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:55.925579+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 7536640 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a8addc20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5e000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.989857674s of 15.692141533s, submitted: 33
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5e000 session 0x55c0a78ccd20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:56.925963+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 7528448 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6890400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:57.926174+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101392384 unmapped: 7593984 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:58.926498+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102801408 unmapped: 6184960 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178338 data_alloc: 234881024 data_used: 12828672
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:59.926718+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f2000/0x0/0x4ffc00000, data 0x12bb4c4/0x137a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:00.926894+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f2000/0x0/0x4ffc00000, data 0x12bb4c4/0x137a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:01.927044+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:02.927246+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:03.927421+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178338 data_alloc: 234881024 data_used: 12828672
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:04.927642+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:05.927867+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f2000/0x0/0x4ffc00000, data 0x12bb4c4/0x137a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:06.928099+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103014400 unmapped: 5971968 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f2000/0x0/0x4ffc00000, data 0x12bb4c4/0x137a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:07.928334+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103030784 unmapped: 5955584 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.189620018s of 12.223476410s, submitted: 19
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:08.928507+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 4136960 heap: 110739456 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230102 data_alloc: 234881024 data_used: 13017088
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:09.928682+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9ef7000/0x0/0x4ffc00000, data 0x16b64c4/0x1775000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 5947392 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:10.928805+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 5431296 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b5a000/0x0/0x4ffc00000, data 0x1a524c4/0x1b11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:11.928981+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 5210112 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:12.929167+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 5210112 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:13.929304+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b5a000/0x0/0x4ffc00000, data 0x1a524c4/0x1b11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 5177344 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253840 data_alloc: 234881024 data_used: 13660160
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:14.929573+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 5177344 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:15.929759+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 5177344 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:16.930009+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 6201344 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:17.930772+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b3c000/0x0/0x4ffc00000, data 0x1a714c4/0x1b30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 6201344 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a6890400 session 0x55c0a689d680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:18.931065+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c89c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.077943802s of 10.287703514s, submitted: 78
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 6184960 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c89c00 session 0x55c0a78abc20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137898 data_alloc: 234881024 data_used: 10588160
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:19.931243+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:20.931600+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa658000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:21.931757+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:22.932246+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:23.932456+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137898 data_alloc: 234881024 data_used: 10588160
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:24.932599+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:25.933001+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:26.933606+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa658000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:27.933794+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:28.934128+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.085984230s of 10.180105209s, submitted: 29
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8b34000 session 0x55c0a773dc20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c89000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137314 data_alloc: 234881024 data_used: 10588160
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:29.934263+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c89000 session 0x55c0a73663c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101392384 unmapped: 11452416 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:30.934586+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 11706368 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:31.934803+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 11706368 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:32.935176+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a980bc20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 11706368 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:33.935327+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 11706368 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026788 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:34.935472+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 11706368 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:35.935587+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:36.935762+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:37.935895+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:38.936084+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026788 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:39.936256+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:40.936411+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:41.936796+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:42.936964+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.142122269s of 14.206800461s, submitted: 25
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:43.937149+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026920 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:44.937382+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:45.937541+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:46.937977+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 12189696 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:47.938253+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 12189696 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:48.938563+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 12189696 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:49.938727+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026936 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:50.939018+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:51.939220+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:52.939434+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec800 session 0x55c0a980a5a0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a8ad9680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ecc00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ecc00 session 0x55c0a8ad90e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89edc00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89edc00 session 0x55c0a9440d20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:53.939607+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a94403c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:54.939795+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026936 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a73683c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:55.939964+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.481446266s of 12.513579369s, submitted: 11
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a7369680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 21381120 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:56.940218+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 21381120 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:57.940427+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 21381120 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:58.940614+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 21372928 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:59.940883+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078832 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 21372928 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec800 session 0x55c0a7368000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:00.941151+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ecc00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa99e000/0x0/0x4ffc00000, data 0xc10462/0xcce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 21315584 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:01.941387+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89edc00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 21864448 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:02.941556+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 18857984 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:03.941789+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 18857984 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:04.941969+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127938 data_alloc: 234881024 data_used: 11554816
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 18857984 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:05.942180+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa99e000/0x0/0x4ffc00000, data 0xc10462/0xcce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 18857984 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:06.942413+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 18857984 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa99e000/0x0/0x4ffc00000, data 0xc10462/0xcce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:07.942628+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103497728 unmapped: 18866176 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:08.942840+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103497728 unmapped: 18866176 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:09.943017+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127938 data_alloc: 234881024 data_used: 11554816
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103497728 unmapped: 18866176 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:10.943154+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103497728 unmapped: 18866176 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:11.943435+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa99e000/0x0/0x4ffc00000, data 0xc10462/0xcce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.397214890s of 16.504899979s, submitted: 21
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 17924096 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:12.983317+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa071000/0x0/0x4ffc00000, data 0x153d462/0x15fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a94412c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107651072 unmapped: 14712832 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:13.983493+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107757568 unmapped: 14606336 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:14.983665+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202938 data_alloc: 234881024 data_used: 11608064
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 14557184 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:15.983799+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 14467072 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:16.984044+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 14467072 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:17.984248+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f83000/0x0/0x4ffc00000, data 0x162b462/0x16e9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 14467072 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:18.984416+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 14467072 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:19.984539+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202202 data_alloc: 234881024 data_used: 11612160
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 14467072 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:20.984670+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 15196160 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:21.984815+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 15753216 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:22.984997+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 15753216 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.089375496s of 11.247861862s, submitted: 88
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:23.985218+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f5f000/0x0/0x4ffc00000, data 0x164f462/0x170d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 16146432 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:24.985393+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202094 data_alloc: 234881024 data_used: 11612160
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 16146432 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:25.985543+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 16146432 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:26.985789+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f5f000/0x0/0x4ffc00000, data 0x164f462/0x170d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 16097280 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:27.986095+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 16080896 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:28.986232+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 16080896 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:29.986362+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205214 data_alloc: 234881024 data_used: 11608064
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 16072704 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:30.986855+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 16072704 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f59000/0x0/0x4ffc00000, data 0x1655462/0x1713000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:31.987005+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 16072704 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:32.987221+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f59000/0x0/0x4ffc00000, data 0x1655462/0x1713000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 16072704 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:33.987344+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:34.987541+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205470 data_alloc: 234881024 data_used: 11608064
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:35.987689+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:36.987886+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f56000/0x0/0x4ffc00000, data 0x1658462/0x1716000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:37.988112+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.554450035s of 14.728706360s, submitted: 16
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:38.988437+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:39.988572+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205794 data_alloc: 234881024 data_used: 11620352
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f56000/0x0/0x4ffc00000, data 0x1658462/0x1716000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 15826944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89edc00 session 0x55c0a8700960
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:40.988717+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 15826944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ecc00 session 0x55c0a89d4960
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:41.988878+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 15826944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:42.989135+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 20619264 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:43.989306+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a94dbc20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:44.989444+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:45.989584+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:46.989767+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:47.989900+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:48.990037+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:49.990189+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:50.990315+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:51.990475+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:52.990607+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:53.990746+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:54.990914+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:55.991044+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:56.991200+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:57.991323+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:58.991451+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:59.991584+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:00.991730+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:01.991888+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:02.992068+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:03.992201+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:04.992449+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:05.992590+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:06.992818+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:07.992988+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:08.993137+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:09.993320+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:10.993453+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 9331 writes, 35K keys, 9331 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9331 writes, 2167 syncs, 4.31 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1449 writes, 4381 keys, 1449 commit groups, 1.0 writes per commit group, ingest: 4.46 MB, 0.01 MB/s
                                           Interval WAL: 1449 writes, 617 syncs, 2.35 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:11.993613+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.843894958s of 33.845417023s, submitted: 36
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a8cf4d20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:12.993762+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:13.993884+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:14.994034+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081228 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:15.994244+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:16.994493+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faaad000/0x0/0x4ffc00000, data 0xb0243f/0xbbf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faaad000/0x0/0x4ffc00000, data 0xb0243f/0xbbf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:17.994662+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:18.994849+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a8ad0d20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101933056 unmapped: 24633344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:19.994981+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8882c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085381 data_alloc: 218103808 data_used: 4796416
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101801984 unmapped: 24764416 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:20.995171+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:21.995407+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:22.995610+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa88000/0x0/0x4ffc00000, data 0xb26462/0xbe4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:23.995780+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa88000/0x0/0x4ffc00000, data 0xb26462/0xbe4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:24.995918+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117757 data_alloc: 234881024 data_used: 9527296
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:25.996071+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:26.996245+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:27.996428+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:28.996584+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:29.996699+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117757 data_alloc: 234881024 data_used: 9527296
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa88000/0x0/0x4ffc00000, data 0xb26462/0xbe4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:30.996841+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:31.996991+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:32.997162+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:33.997311+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:34.997403+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119885 data_alloc: 234881024 data_used: 9584640
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:35.997538+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa88000/0x0/0x4ffc00000, data 0xb26462/0xbe4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.795618057s of 23.862209320s, submitted: 13
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 21291008 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:36.997693+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 21291008 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:37.997829+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:38.997953+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 21168128 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa828000/0x0/0x4ffc00000, data 0xd86462/0xe44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:39.998111+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144469 data_alloc: 234881024 data_used: 9867264
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:40.998269+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:41.998405+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:42.998633+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:43.998769+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:44.998937+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144469 data_alloc: 234881024 data_used: 9867264
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:45.999129+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:46.999380+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:47.999583+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:48.999818+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:49.999987+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144469 data_alloc: 234881024 data_used: 9867264
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:51.000136+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:52.000292+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:53.000435+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:54.000558+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:55.000757+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144469 data_alloc: 234881024 data_used: 9867264
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883c00 session 0x55c0a8ad1a40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a8cfb680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a9441680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883c00 session 0x55c0a8546b40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:56.000962+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105144320 unmapped: 21422080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:57.001213+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105144320 unmapped: 21422080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:58.001456+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105144320 unmapped: 21422080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.838689804s of 21.900722504s, submitted: 29
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:59.002246+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105144320 unmapped: 21422080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0xd8e48b/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a78c0780
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f6000/0x0/0x4ffc00000, data 0x13b748b/0x1476000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:00.002482+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ecc00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106749952 unmapped: 19816448 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ecc00 session 0x55c0a9529c20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192758 data_alloc: 234881024 data_used: 9867264
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:01.002940+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106758144 unmapped: 19808256 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:02.003078+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106758144 unmapped: 19808256 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f6000/0x0/0x4ffc00000, data 0x13b74c4/0x1476000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a8cfa960
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:03.003445+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 20258816 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:04.003604+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 20258816 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:05.004064+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f6000/0x0/0x4ffc00000, data 0x13b74c4/0x1476000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 19873792 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237059 data_alloc: 234881024 data_used: 15400960
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:06.004281+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 16564224 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:07.004492+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 16564224 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f6000/0x0/0x4ffc00000, data 0x13b74c4/0x1476000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:08.004951+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 16564224 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:09.005392+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f6000/0x0/0x4ffc00000, data 0x13b74c4/0x1476000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 16564224 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:10.005775+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 16564224 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237059 data_alloc: 234881024 data_used: 15400960
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:11.006445+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.794046402s of 12.929645538s, submitted: 35
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 16523264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:12.008223+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 16523264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:13.008436+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 16523264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f4000/0x0/0x4ffc00000, data 0x13b84c4/0x1477000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:14.008598+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 16523264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f4000/0x0/0x4ffc00000, data 0x13b84c4/0x1477000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:15.008766+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111370240 unmapped: 15196160 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274463 data_alloc: 234881024 data_used: 16379904
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:16.008932+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 115621888 unmapped: 10944512 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:17.009112+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 10412032 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:18.009273+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 10412032 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9da6000/0x0/0x4ffc00000, data 0x18074c4/0x18c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:19.009446+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:20.009618+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288467 data_alloc: 234881024 data_used: 17231872
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:21.009768+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:22.009909+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9da6000/0x0/0x4ffc00000, data 0x18074c4/0x18c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:23.010056+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:24.010180+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.063606262s of 13.290143013s, submitted: 63
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:25.010336+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883c00 session 0x55c0a94db0e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288567 data_alloc: 234881024 data_used: 17240064
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a9529680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a9528d20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:26.011048+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 16154624 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a89d4d20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa4df000/0x0/0x4ffc00000, data 0xd8f462/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:27.011457+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 16130048 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:28.011639+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 16130048 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:29.011914+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 16130048 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:30.012095+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa4df000/0x0/0x4ffc00000, data 0xd8f462/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 16130048 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec800 session 0x55c0a8adda40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8882c00 session 0x55c0a99310e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151023 data_alloc: 218103808 data_used: 9027584
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:31.012348+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 16130048 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:32.012632+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a8cfa780
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:33.013566+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:34.013729+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:35.013848+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051737 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:36.013984+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.560731888s of 12.156168938s, submitted: 49
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:37.014483+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:38.014622+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:39.015282+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:40.015521+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107782144 unmapped: 18784256 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051593 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:41.015653+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107782144 unmapped: 18784256 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:42.015886+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107782144 unmapped: 18784256 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:43.016053+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:44.016207+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:45.016524+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050834 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:46.016650+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:47.016926+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:48.017112+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.856632233s of 12.034231186s, submitted: 10
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:49.017320+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:50.017501+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050263 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:51.017656+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:52.017791+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:53.017936+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:54.018064+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:55.018394+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050263 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:56.018532+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:57.018736+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:58.018882+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:59.019084+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:00.019208+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050263 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:01.019362+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:02.019584+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:03.019783+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:04.019964+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:05.020123+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.572757721s of 16.855772018s, submitted: 2
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 18563072 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067963 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:06.020285+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 18554880 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:07.020557+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0x70643f/0x7c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 18554880 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:08.020697+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a7369680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:09.020933+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:10.021077+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067963 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a95170e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:11.021258+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a95165a0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:12.021409+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0x70643f/0x7c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8882c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8882c00 session 0x55c0a9516d20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:13.021528+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a9517860
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:14.021743+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea8000/0x0/0x4ffc00000, data 0x70644f/0x7c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:15.021946+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073425 data_alloc: 218103808 data_used: 5316608
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:16.022110+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea8000/0x0/0x4ffc00000, data 0x70644f/0x7c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:17.022411+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:18.022558+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:19.022720+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:20.022933+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073425 data_alloc: 218103808 data_used: 5316608
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:21.023136+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea8000/0x0/0x4ffc00000, data 0x70644f/0x7c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:22.023261+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:23.023416+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8838800 session 0x55c0a94db2c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:24.023638+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:25.023776+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea8000/0x0/0x4ffc00000, data 0x70644f/0x7c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 18505728 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.093435287s of 20.120613098s, submitted: 9
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087727 data_alloc: 218103808 data_used: 5349376
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:26.023907+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 17842176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:27.024440+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 17793024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:28.024722+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 17408000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:29.024904+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 17408000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:30.025108+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 17408000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122385 data_alloc: 218103808 data_used: 5582848
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:31.025240+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9e2000/0x0/0x4ffc00000, data 0xbb244f/0xc70000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109166592 unmapped: 17399808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:32.025458+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109166592 unmapped: 17399808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:33.025622+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 18046976 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:34.025840+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 18046976 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:35.026003+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 18046976 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117053 data_alloc: 218103808 data_used: 5582848
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:36.026889+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f9000/0x0/0x4ffc00000, data 0xbb544f/0xc73000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 18046976 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:37.027118+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f9000/0x0/0x4ffc00000, data 0xbb544f/0xc73000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.935380936s of 12.115594864s, submitted: 71
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:38.027446+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:39.027618+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:40.028530+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f9000/0x0/0x4ffc00000, data 0xbb544f/0xc73000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117069 data_alloc: 218103808 data_used: 5578752
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:41.028823+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:42.029137+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:43.029301+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:44.029471+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8838000 session 0x55c0a78a9e00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:45.029656+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f9000/0x0/0x4ffc00000, data 0xbb544f/0xc73000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117069 data_alloc: 218103808 data_used: 5578752
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:46.029787+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a5ebb400 session 0x55c0a66c2f00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:47.029994+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:48.030139+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:49.030277+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:50.030730+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f8000/0x0/0x4ffc00000, data 0xbb644f/0xc74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117277 data_alloc: 218103808 data_used: 5582848
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:51.030977+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.792579651s of 13.853042603s, submitted: 8
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:52.031138+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8d7b000 session 0x55c0a6aaf0e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8882c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:53.031267+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a90f81e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a8587860
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a78bf860
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a78cc000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8742000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:54.031559+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8742000 session 0x55c0a78aaf00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a9519a40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a78a9e00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a78a9c20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a78a8b40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:55.031795+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e45e/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:56.031966+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140291 data_alloc: 218103808 data_used: 5582848
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:57.032168+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c89000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c89000 session 0x55c0a78adc20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e45e/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:58.032343+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a78ac3c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a78ad2c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:59.032642+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a956e1e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c88000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:00.032831+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 18497536 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:01.032987+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154601 data_alloc: 218103808 data_used: 7262208
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8d7b800 session 0x55c0a78be000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 18497536 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:02.033238+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 18497536 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0xd8e46e/0xe4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:03.033444+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 18497536 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:04.033634+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.119801521s of 13.166754723s, submitted: 15
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108093440 unmapped: 18472960 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:05.033762+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18350080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:06.033884+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154993 data_alloc: 218103808 data_used: 7430144
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 18186240 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:07.034064+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0xd8e46e/0xe4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 18112512 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:08.034252+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 18112512 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:09.034442+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 17063936 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:10.034571+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 17063936 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:11.034731+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157705 data_alloc: 218103808 data_used: 7430144
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81b000/0x0/0x4ffc00000, data 0xd8f46e/0xe4f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 11935744 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:12.034878+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:13.035052+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:14.035216+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:15.035410+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f99ea000/0x0/0x4ffc00000, data 0x1bc146e/0x1c81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:16.035544+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a8ad5a40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1276755 data_alloc: 218103808 data_used: 9060352
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:17.035707+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:18.035889+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.644134521s of 13.565666199s, submitted: 360
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f99e9000/0x0/0x4ffc00000, data 0x1bc346e/0x1c83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 11608064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:19.036009+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 11608064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f99e9000/0x0/0x4ffc00000, data 0x1bc346e/0x1c83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:20.036143+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 11608064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:21.036277+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268627 data_alloc: 218103808 data_used: 9060352
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 11608064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:22.036666+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f99e8000/0x0/0x4ffc00000, data 0x1bc446e/0x1c84000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 11599872 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:23.036822+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f99e8000/0x0/0x4ffc00000, data 0x1bc446e/0x1c84000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 11599872 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:24.036969+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 11599872 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:25.037107+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 11599872 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:26.037254+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268691 data_alloc: 218103808 data_used: 9060352
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a90f81e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c88000 session 0x55c0a8700780
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 11591680 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:27.037467+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a5ece3c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f5000/0x0/0x4ffc00000, data 0xbb944f/0xc77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:28.037639+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:29.037805+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.570550919s of 11.678001404s, submitted: 20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:30.037968+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f5000/0x0/0x4ffc00000, data 0xbb944f/0xc77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:31.038133+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130310 data_alloc: 218103808 data_used: 5578752
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:32.038293+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:33.038462+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:34.038620+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:35.038788+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:36.038937+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130310 data_alloc: 218103808 data_used: 5578752
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f5000/0x0/0x4ffc00000, data 0xbb944f/0xc77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:37.039133+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:38.039289+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f5000/0x0/0x4ffc00000, data 0xbb944f/0xc77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:39.039426+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:40.039553+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.610730171s of 10.671720505s, submitted: 9
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 14475264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:41.039653+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a87005a0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec800 session 0x55c0a89fb680
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130682 data_alloc: 218103808 data_used: 5578752
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f5000/0x0/0x4ffc00000, data 0xbb944f/0xc77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 14467072 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:42.039769+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a73663c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:43.039864+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:44.039967+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:45.040088+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:46.040246+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073177 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:47.040431+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:48.040563+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:49.040692+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:50.040788+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:51.040935+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073177 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:52.041058+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:53.041247+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:54.041418+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:55.041552+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:56.041689+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073177 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:57.041926+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:58.042048+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a6cbe400 session 0x55c0a956fe00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:59.042158+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:00.042278+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:01.042442+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073177 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:02.042576+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:03.042707+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:04.042850+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:05.042980+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:06.043110+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073177 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:07.043269+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:08.043458+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:09.043628+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.895616531s of 28.939466476s, submitted: 15
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:10.043768+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:11.043947+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072077 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:12.044093+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a78ab860
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 14581760 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:13.044237+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 14573568 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:14.044424+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faf2f000/0x0/0x4ffc00000, data 0x68043f/0x73d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 14573568 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faf2f000/0x0/0x4ffc00000, data 0x68043f/0x73d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:15.044592+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 14581760 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:16.044744+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faf2f000/0x0/0x4ffc00000, data 0x68043f/0x73d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082839 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a89fc780
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:17.045041+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 14581760 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:18.045260+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 14581760 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:19.045402+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 14581760 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:20.045539+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 14573568 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faf2e000/0x0/0x4ffc00000, data 0x680462/0x73e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:21.045669+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 14655488 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084189 data_alloc: 218103808 data_used: 4796416
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:22.045797+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 14655488 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec800 session 0x55c0a90f8f00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:23.045919+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c88000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 14655488 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.563888550s of 13.634863853s, submitted: 20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c88000 session 0x55c0a94da780
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:24.046042+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 14647296 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:25.046218+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 14647296 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:26.046384+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 14647296 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075050 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:27.046593+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 14647296 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:28.047551+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 14647296 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:29.047739+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:30.047880+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:31.048088+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075050 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:32.048263+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:33.048437+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:34.048593+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:35.048751+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:36.048913+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075050 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:37.049134+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 14630912 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c88000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.995434761s of 14.031690598s, submitted: 13
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c88000 session 0x55c0a66c3c20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:38.049260+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112312320 unmapped: 14254080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fafdd000/0x0/0x4ffc00000, data 0x5d243f/0x68f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:39.049414+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fafdd000/0x0/0x4ffc00000, data 0x5d243f/0x68f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:40.049568+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fafdd000/0x0/0x4ffc00000, data 0x5d243f/0x68f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:41.049716+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a7366b40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082456 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a78c14a0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8d7b800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8d7b800 session 0x55c0a95165a0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:42.049921+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2400 session 0x55c0a78cbc20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a689c1e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:43.050093+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:44.050225+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:45.050444+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fafdd000/0x0/0x4ffc00000, data 0x5d243f/0x68f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:46.051127+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 14270464 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082760 data_alloc: 218103808 data_used: 4825088
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:47.051328+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 14270464 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fafdd000/0x0/0x4ffc00000, data 0x5d243f/0x68f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:48.051449+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 14270464 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a8ad9e00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c88000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:49.051651+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 14270464 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.093137741s of 12.102007866s, submitted: 5
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:50.051913+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111493120 unmapped: 15073280 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c88000 session 0x55c0a8cfab40
Jan 23 10:30:14 compute-0 ceph-osd[82641]: mgrc ms_handle_reset ms_handle_reset con 0x55c0a8c5ec00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4198923246
Jan 23 10:30:14 compute-0 ceph-osd[82641]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4198923246,v1:192.168.122.100:6801/4198923246]
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: get_auth_request con 0x55c0a8883800 auth_method 0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: mgrc handle_mgr_configure stats_period=5
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:51.052153+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075806 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:52.052404+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:53.052509+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:54.052692+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a956f0e0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:55.052845+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:56.052990+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075806 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:57.053190+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:58.053309+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:59.053491+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:00.053716+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:01.053952+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075806 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:02.054166+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:03.054282+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:04.054454+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:05.054593+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8d7b800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.253648758s of 16.266384125s, submitted: 4
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:06.054850+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075938 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:07.055121+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:08.055271+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:09.055468+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:10.055652+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:11.055829+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075822 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:12.056006+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:13.056144+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:14.056328+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:15.056514+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:16.056683+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075822 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:17.056818+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:18.056935+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:19.057084+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.407391548s of 13.959419250s, submitted: 5
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:20.057220+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:21.057444+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075690 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:22.057605+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:23.057701+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:24.058508+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:25.058655+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:26.058794+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 15335424 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075690 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:27.058975+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a95185a0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a980b4a0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a73692c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a8546d20
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 15335424 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a6aae960
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c88000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c88000 session 0x55c0a95292c0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a94c2f00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a94da000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a7368000
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:28.059120+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 20684800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:29.059296+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 20684800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:30.059464+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 20684800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa95a000/0x0/0x4ffc00000, data 0xc5444f/0xd12000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:31.059599+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 20684800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133978 data_alloc: 218103808 data_used: 4788224
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:32.059767+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 20684800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa95a000/0x0/0x4ffc00000, data 0xc5444f/0xd12000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:33.059905+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.359125137s of 13.556776047s, submitted: 14
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20676608 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:34.060048+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20676608 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a87014a0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2c00
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:35.060175+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20676608 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:36.060328+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20676608 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135247 data_alloc: 218103808 data_used: 4796416
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:37.060542+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 20512768 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:38.060684+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa959000/0x0/0x4ffc00000, data 0xc54472/0xd13000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:39.060822+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa959000/0x0/0x4ffc00000, data 0xc54472/0xd13000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:40.061000+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:41.061881+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183583 data_alloc: 234881024 data_used: 11939840
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:42.062017+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:43.062156+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa959000/0x0/0x4ffc00000, data 0xc54472/0xd13000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:44.062306+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:45.062702+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa959000/0x0/0x4ffc00000, data 0xc54472/0xd13000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:46.062829+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:14 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183583 data_alloc: 234881024 data_used: 11939840
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:47.063033+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:48.063227+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.850227356s of 15.030684471s, submitted: 4
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 13516800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:49.063641+0000)
Jan 23 10:30:14 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 12918784 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:14 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:50.063897+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa0d8000/0x0/0x4ffc00000, data 0x10bd472/0x117c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,1,0,7,2])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 14065664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:51.065830+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 14065664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232297 data_alloc: 234881024 data_used: 13488128
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:52.066583+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:53.068094+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:54.068277+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:55.068491+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:56.069309+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa0ce000/0x0/0x4ffc00000, data 0x10c7472/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235945 data_alloc: 234881024 data_used: 13746176
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:57.069745+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:58.069912+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:59.070460+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:00.071029+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa0ce000/0x0/0x4ffc00000, data 0x10c7472/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:01.071187+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa0ce000/0x0/0x4ffc00000, data 0x10c7472/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 13885440 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235945 data_alloc: 234881024 data_used: 13746176
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa0ce000/0x0/0x4ffc00000, data 0x10c7472/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:02.071397+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2c00 session 0x55c0a90f9e00
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 13885440 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.456407547s of 14.740316391s, submitted: 71
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:03.071655+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a94c2780
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:04.071973+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:05.072188+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:06.072376+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083725 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:07.072614+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:08.072773+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:09.073008+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:10.073266+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:11.073451+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083725 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:12.073582+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:13.073758+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a8cf4b40
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:14.073968+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:15.074120+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:16.074248+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083725 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:17.074550+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:18.074752+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:19.074974+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:20.075166+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:21.075396+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083725 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:22.075544+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:23.075711+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:24.075916+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.819961548s of 21.865715027s, submitted: 18
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a78a8b40
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:25.076039+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a6aafa40
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:26.076298+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138998 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:27.076933+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa468000/0x0/0x4ffc00000, data 0xd364a1/0xdf4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:28.077105+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:29.077467+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:30.077602+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:31.077719+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 20144128 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172894 data_alloc: 234881024 data_used: 9756672
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa468000/0x0/0x4ffc00000, data 0xd364a1/0xdf4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [1])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:32.077870+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:33.078031+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:34.078177+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:35.078340+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:36.078548+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193870 data_alloc: 234881024 data_used: 12111872
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:37.078744+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa468000/0x0/0x4ffc00000, data 0xd364a1/0xdf4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:38.078905+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa468000/0x0/0x4ffc00000, data 0xd364a1/0xdf4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:39.079047+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:40.079252+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa468000/0x0/0x4ffc00000, data 0xd364a1/0xdf4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:41.079395+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.707281113s of 16.824014664s, submitted: 24
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114368512 unmapped: 18014208 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193738 data_alloc: 234881024 data_used: 12111872
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:42.079553+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114376704 unmapped: 18006016 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:43.079680+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 14639104 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:44.079831+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa010000/0x0/0x4ffc00000, data 0x118e4a1/0x124c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 14639104 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:45.079968+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af3000
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af3000 session 0x55c0a66c23c0
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:46.080084+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9bc2000/0x0/0x4ffc00000, data 0x15dc4a1/0x169a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268072 data_alloc: 234881024 data_used: 12111872
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:47.080298+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:48.080525+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9bc2000/0x0/0x4ffc00000, data 0x15dc4a1/0x169a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:49.080728+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:50.080936+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:51.081206+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266352 data_alloc: 234881024 data_used: 12115968
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:52.081343+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.506553650s of 10.660974503s, submitted: 56
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a78aaf00
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 14065664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:53.081647+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 14065664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:54.081934+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b9b000/0x0/0x4ffc00000, data 0x16034a1/0x16c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120659968 unmapped: 11722752 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:55.082227+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120659968 unmapped: 11722752 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:56.082477+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120659968 unmapped: 11722752 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293193 data_alloc: 234881024 data_used: 15368192
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:57.082677+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120659968 unmapped: 11722752 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:58.082879+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b9b000/0x0/0x4ffc00000, data 0x16034a1/0x16c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120659968 unmapped: 11722752 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:59.083153+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b9b000/0x0/0x4ffc00000, data 0x16034a1/0x16c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 11689984 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:00.083451+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 11689984 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:01.083579+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 11689984 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293193 data_alloc: 234881024 data_used: 15368192
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:02.083691+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 11689984 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:03.083889+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 11689984 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b9b000/0x0/0x4ffc00000, data 0x16034a1/0x16c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:04.084148+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.563364029s of 12.585421562s, submitted: 7
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 10600448 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:05.084290+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 9797632 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:06.084501+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123068416 unmapped: 9314304 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:07.084718+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332875 data_alloc: 234881024 data_used: 15872000
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f97b9000/0x0/0x4ffc00000, data 0x19df4a1/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:08.084901+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:09.085032+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:10.085138+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:11.085338+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:12.085547+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332875 data_alloc: 234881024 data_used: 15872000
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x19ef4a1/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:13.085683+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:14.085902+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:15.086274+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:16.086555+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.927405357s of 11.535178185s, submitted: 52
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:17.086751+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333043 data_alloc: 234881024 data_used: 15872000
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x19ef4a1/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:18.087001+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:19.087325+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:20.087502+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:21.087646+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:22.087893+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333043 data_alloc: 234881024 data_used: 15872000
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:23.088102+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x19ef4a1/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:24.088304+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a78cd4a0
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a95290e0
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 10067968 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:25.088471+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 10067968 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:26.088638+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f97af000/0x0/0x4ffc00000, data 0x19ef4a1/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,5])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.795674324s of 10.131553650s, submitted: 3
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 12763136 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:27.088992+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241079 data_alloc: 234881024 data_used: 12226560
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119627776 unmapped: 12754944 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:28.089147+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:29.089427+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9fee000/0x0/0x4ffc00000, data 0x11b04a1/0x126e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a9528780
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:30.089648+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:31.089787+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:32.089974+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234211 data_alloc: 234881024 data_used: 12115968
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:33.090112+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:34.090317+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9fee000/0x0/0x4ffc00000, data 0x11b04a1/0x126e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:35.090520+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:36.090690+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:37.090963+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234211 data_alloc: 234881024 data_used: 12115968
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9fee000/0x0/0x4ffc00000, data 0x11b04a1/0x126e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:38.091174+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:39.091430+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:40.091630+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.307956696s of 13.755240440s, submitted: 22
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a7926d20
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:41.091761+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:42.091981+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234139 data_alloc: 234881024 data_used: 12115968
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:43.092193+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9fee000/0x0/0x4ffc00000, data 0x11b04a1/0x126e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:44.092446+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:45.092642+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:46.092978+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:47.093231+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a78cbe00
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:48.093463+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:49.093657+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:50.093795+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:51.093921+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:52.094072+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:53.094223+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:54.094343+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:55.094499+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:56.094654+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:57.094827+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:58.094944+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:59.095086+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:00.095211+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:01.095419+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:02.095572+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:03.095799+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:04.095954+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:05.096137+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:06.096391+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:07.096647+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:08.096802+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:09.096999+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:10.097209+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:11.097370+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:12.097518+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:13.097695+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:14.097877+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:15.098087+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:16.098248+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:17.098433+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:18.098622+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:19.101838+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:20.101990+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:21.102133+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:22.102290+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:23.102586+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:24.102758+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:25.102977+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.846614838s of 44.900455475s, submitted: 16
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 18030592 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fab1f000/0x0/0x4ffc00000, data 0x68043f/0x73d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:26.103155+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a87012c0
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa28f000/0x0/0x4ffc00000, data 0xf1043f/0xfcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:27.103384+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166153 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:28.103547+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa28f000/0x0/0x4ffc00000, data 0xf1043f/0xfcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:29.103709+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a8ad63c0
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a85194a0
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:30.103874+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a6aaf0e0
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:31.104013+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:32.104143+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166153 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa28f000/0x0/0x4ffc00000, data 0xf1043f/0xfcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:33.104328+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:34.104508+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a6aafa40
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:35.104641+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114556928 unmapped: 29376512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:36.104745+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 29360128 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:37.104929+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 29360128 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170403 data_alloc: 218103808 data_used: 4796416
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:38.105091+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa26a000/0x0/0x4ffc00000, data 0xf3444f/0xff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 29745152 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:39.105259+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:40.105441+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:41.105554+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa26a000/0x0/0x4ffc00000, data 0xf3444f/0xff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:42.105660+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230899 data_alloc: 234881024 data_used: 13778944
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:43.105791+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:44.105982+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:45.106137+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa26a000/0x0/0x4ffc00000, data 0xf3444f/0xff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:46.106259+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:47.106461+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230899 data_alloc: 234881024 data_used: 13778944
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:48.106616+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 26132480 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa26a000/0x0/0x4ffc00000, data 0xf3444f/0xff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:49.106776+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 26116096 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.689573288s of 24.789501190s, submitted: 12
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:50.106922+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa008000/0x0/0x4ffc00000, data 0x119644f/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:51.107060+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:52.107210+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251219 data_alloc: 234881024 data_used: 13832192
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:53.107427+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa008000/0x0/0x4ffc00000, data 0x119644f/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:54.107584+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:55.107732+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:56.107881+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa008000/0x0/0x4ffc00000, data 0x119644f/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 23355392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:57.108090+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 23355392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283733 data_alloc: 234881024 data_used: 13811712
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:58.108263+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 23355392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:59.108477+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 23355392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b7c000/0x0/0x4ffc00000, data 0x162244f/0x16e0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:00.108628+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.881953239s of 10.221186638s, submitted: 45
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 23330816 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:01.108796+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120979456 unmapped: 22953984 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:02.109223+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121143296 unmapped: 22790144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294573 data_alloc: 234881024 data_used: 14086144
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:03.109398+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 22740992 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:04.110042+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 22740992 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b44000/0x0/0x4ffc00000, data 0x165144f/0x170f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:05.110169+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:06.110316+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:07.110914+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295789 data_alloc: 234881024 data_used: 14213120
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b44000/0x0/0x4ffc00000, data 0x165144f/0x170f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:08.111208+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x165444f/0x1712000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:09.111414+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:10.111562+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a78ad0e0
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a89faf00
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:11.111713+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af3800
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.106009483s of 11.168287277s, submitted: 28
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:12.111929+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111229 data_alloc: 218103808 data_used: 4902912
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af3800 session 0x55c0a89fb2c0
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:13.112087+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x59043f/0x64d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:14.112223+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:15.112379+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:16.112501+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:17.112660+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:18.112837+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:19.113066+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:20.113228+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:21.113420+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:22.113566+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:23.113698+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:24.113836+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:25.114045+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:26.114228+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:27.114547+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:28.114690+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:29.115066+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:30.115394+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:31.115589+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:32.115779+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:33.115936+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:34.116114+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:35.116240+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:36.116440+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:37.116915+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:38.117179+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:39.117397+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:40.117567+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:41.117692+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:42.117893+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:43.118029+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:44.118196+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:45.118400+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:46.118577+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:47.118965+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:48.119139+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:49.119275+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:50.119507+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:51.119727+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:52.120012+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:53.120255+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:54.120413+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:55.120558+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:56.120814+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:57.121050+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:58.121200+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:59.121404+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:00.121558+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:01.121743+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:02.121913+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:03.122042+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:04.122221+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:05.122401+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:06.122557+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:07.122754+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:08.122936+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:09.123076+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:10.123223+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:11.123374+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 2975 syncs, 3.79 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1946 writes, 6900 keys, 1946 commit groups, 1.0 writes per commit group, ingest: 8.66 MB, 0.01 MB/s
                                           Interval WAL: 1946 writes, 808 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:12.123593+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:13.123736+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:14.123877+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:15.124025+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:16.124297+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:17.124601+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:18.124806+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:19.124977+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:20.125123+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:21.125298+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:22.125497+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:23.125670+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:24.125906+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:25.126068+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:26.126293+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:27.126566+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:28.126785+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:29.126975+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:30.127176+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:31.127333+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:32.127471+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:33.127602+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:34.127812+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:35.127960+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:36.128086+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:37.128259+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:38.128397+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:39.128531+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:40.128658+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:41.128764+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: do_command 'config diff' '{prefix=config diff}'
Jan 23 10:30:15 compute-0 ceph-osd[82641]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:42.128891+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: do_command 'config show' '{prefix=config show}'
Jan 23 10:30:15 compute-0 ceph-osd[82641]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 10:30:15 compute-0 ceph-osd[82641]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 10:30:15 compute-0 ceph-osd[82641]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 10:30:15 compute-0 ceph-osd[82641]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 27418624 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:30:15 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:30:15 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:43.129037+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:30:15 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:30:15 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:44.129174+0000)
Jan 23 10:30:15 compute-0 ceph-osd[82641]: do_command 'log dump' '{prefix=log dump}'
Jan 23 10:30:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:15 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26027 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16518 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:30:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261349115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:30:15 compute-0 nova_compute[249229]: 2026-01-23 10:30:15.275 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:30:15 compute-0 nova_compute[249229]: 2026-01-23 10:30:15.280 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:30:15 compute-0 nova_compute[249229]: 2026-01-23 10:30:15.301 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:30:15 compute-0 nova_compute[249229]: 2026-01-23 10:30:15.303 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:30:15 compute-0 nova_compute[249229]: 2026-01-23 10:30:15.303 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:30:15 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:30:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:30:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/644955657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25987 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:15.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:15 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26048 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16539 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.25993 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mgr[74633]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:30:15 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T10:30:15.714+0000 7f28655d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:30:15 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26002 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 23 10:30:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2029648398' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.25985 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.16494 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.26003 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3571602394' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/4293863724' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3550185203' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.26027 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2261349115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2952901814' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/834363629' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/4201206189' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/644955657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:30:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1559929094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:30:16 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26066 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:16 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16548 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:16 compute-0 nova_compute[249229]: 2026-01-23 10:30:16.215 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 23 10:30:16 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/129791328' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:30:16 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26084 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:16 compute-0 crontab[274836]: (root) LIST (root)
Jan 23 10:30:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:16.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:16 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16569 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 10:30:16 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3938838710' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:30:16 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26029 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:16 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26032 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.16518 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.25987 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.26048 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.16539 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.25993 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3107004277' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.26002 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2029648398' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.26066 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.16548 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2687778899' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/4004933432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/129791328' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1506185174' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4224081289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3938838710' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:17 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26111 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:17 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26056 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16587 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 23 10:30:17 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169827845' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:30:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:17.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:17 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26126 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26071 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:17.840Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:17 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16593 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 23 10:30:17 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790870186' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mon[74335]: from='client.26084 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mon[74335]: from='client.16569 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mon[74335]: from='client.26029 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mon[74335]: from='client.26032 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/737140567' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26141 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mon[74335]: pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3169827845' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/873021322' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/232885380' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/753443402' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/790870186' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26089 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:18 compute-0 nova_compute[249229]: 2026-01-23 10:30:18.238 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:30:18 compute-0 nova_compute[249229]: 2026-01-23 10:30:18.239 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:30:18 compute-0 nova_compute[249229]: 2026-01-23 10:30:18.285 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:18 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16617 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26156 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:18.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:18 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26098 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16635 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 23 10:30:18 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3337747780' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 23 10:30:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:18.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:19 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26113 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.26111 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.26056 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.16587 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.26126 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.26071 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.16593 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.26141 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.26089 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2692355636' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/641666540' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2848004950' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2869222915' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3337747780' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3799754458' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:19 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16653 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 23 10:30:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457950240' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 23 10:30:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 10:30:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:19.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 10:30:19 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16668 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 23 10:30:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946892865' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26137 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:19] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:30:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:19] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:30:20
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['volumes', 'images', 'vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log']
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:30:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:30:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:30:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 23 10:30:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112946453' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 23 10:30:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1755085481' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.16617 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.26156 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.26098 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.16635 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.26113 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1425965098' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.16653 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/457950240' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/641995280' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3233910781' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2946892865' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1463104035' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3682934773' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:30:20 compute-0 systemd[1]: Starting Hostname Service...
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26158 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:20 compute-0 systemd[1]: Started Hostname Service.
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:30:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 23 10:30:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256833078' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:30:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:20.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 23 10:30:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4062298466' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:20 compute-0 nova_compute[249229]: 2026-01-23 10:30:20.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:30:20 compute-0 nova_compute[249229]: 2026-01-23 10:30:20.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:30:20 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26173 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 23 10:30:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1191692492' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 23 10:30:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 23 10:30:21 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1439010025' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 23 10:30:21 compute-0 nova_compute[249229]: 2026-01-23 10:30:21.218 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:21 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26231 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 23 10:30:21 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786861485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 23 10:30:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:21.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 23 10:30:21 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493257669' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:21 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26243 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 10:30:21 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1879524475' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 10:30:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 23 10:30:22 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2855196845' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 23 10:30:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 23 10:30:22 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569001041' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 23 10:30:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 23 10:30:22 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526295899' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:22.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:22 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16767 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 23 10:30:22 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694179319' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:23 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16779 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:23 compute-0 nova_compute[249229]: 2026-01-23 10:30:23.286 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:23 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16785 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:23.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:23.725Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.16668 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.26137 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2112946453' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1755085481' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/341034865' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/331196613' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1013832290' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.26158 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/256833078' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4062298466' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1621252379' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/758305960' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.26173 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1191692492' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/15662595' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mon[74335]: pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1439010025' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 23 10:30:23 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16797 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 23 10:30:24 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2531849906' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 23 10:30:24 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16809 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:24.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Jan 23 10:30:24 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648245877' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 23 10:30:24 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16821 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:25 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16836 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 23 10:30:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/340871671' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:25.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 23 10:30:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836282561' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 23 10:30:25 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16866 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:26 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:26 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16887 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:26 compute-0 nova_compute[249229]: 2026-01-23 10:30:26.221 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.26231 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1786861485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2098994895' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2493257669' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.26243 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1879524475' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2855196845' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1569001041' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/526295899' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.16767 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1090320711' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3694179319' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.16779 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3613864487' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.16785 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2294747835' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1692553818' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.16797 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2531849906' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 23 10:30:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:26.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:26 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26321 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:27 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26357 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3854792418' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3166484618' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.16809 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1648245877' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.16821 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2182990666' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.16836 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/340871671' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1855905813' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2836282561' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.16866 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.16887 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4080595786' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2610036103' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.26321 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3569729491' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:27 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:27 compute-0 ceph-mon[74335]: pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 23 10:30:27 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246982503' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 23 10:30:27 compute-0 sudo[276256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:30:27 compute-0 sudo[276256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:27 compute-0 sudo[276256]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:27.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:27 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26387 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:27.841Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:30:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:27.842Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:30:27 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16956 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:27 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26393 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26399 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:28 compute-0 nova_compute[249229]: 2026-01-23 10:30:28.287 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.26357 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/896818155' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2246982503' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/931928461' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2736972453' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.26387 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.16956 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2037715614' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.26393 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/26149318' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.26399 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1004606240' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/292578968' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3039266935' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26414 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:28.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Jan 23 10:30:28 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3849795310' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26426 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:28.926Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:30:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:28.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 23 10:30:29 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687676278' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26438 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26335 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mon[74335]: from='client.26414 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4061140840' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/631308360' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/584663929' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3849795310' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mon[74335]: from='client.26426 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1961868252' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3026965908' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mon[74335]: pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:29 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2920006099' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3687676278' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 23 10:30:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:29.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:29 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26347 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 23 10:30:29 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3723003283' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 23 10:30:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:29] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:30:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:29] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:30:30 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26359 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:30 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26450 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:30 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.16995 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:30 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26365 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:30 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26371 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:30.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:30 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 23 10:30:30 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321049884' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 23 10:30:30 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26380 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:31 compute-0 nova_compute[249229]: 2026-01-23 10:30:31.222 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:31 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 23 10:30:31 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3112698172' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 23 10:30:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:31.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:32 compute-0 ceph-mon[74335]: from='client.26438 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:32 compute-0 ceph-mon[74335]: from='client.26335 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:32 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1997317476' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 10:30:32 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1049295026' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 23 10:30:32 compute-0 ceph-mon[74335]: from='client.26347 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:32 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3723003283' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 23 10:30:32 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/629032271' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 23 10:30:32 compute-0 ceph-mon[74335]: from='client.26359 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:32 compute-0 ceph-mon[74335]: from='client.26450 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:32 compute-0 podman[277053]: 2026-01-23 10:30:32.194766582 +0000 UTC m=+0.208532272 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 10:30:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:32.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:32 compute-0 ovs-appctl[277478]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 23 10:30:32 compute-0 ovs-appctl[277491]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 23 10:30:32 compute-0 ovs-appctl[277498]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 23 10:30:32 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26389 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:32 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26395 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mon[74335]: from='client.16995 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mon[74335]: from='client.26365 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mon[74335]: from='client.26371 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1786797257' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3321049884' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mon[74335]: from='client.26380 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mon[74335]: pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:33 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3112698172' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1709192928' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26477 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:33 compute-0 nova_compute[249229]: 2026-01-23 10:30:33.332 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 23 10:30:33 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510438922' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26404 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:33.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:33.727Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:30:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:33.729Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:33 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17031 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26416 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:33 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26489 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:34 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17040 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:34 compute-0 ceph-mon[74335]: from='client.26389 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:34 compute-0 ceph-mon[74335]: from='client.26395 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:34 compute-0 ceph-mon[74335]: from='client.26477 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:34 compute-0 ceph-mon[74335]: pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:34 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3510438922' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 23 10:30:34 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3880234350' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 23 10:30:34 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/368403482' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 23 10:30:34 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3916140662' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 23 10:30:34 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26498 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 23 10:30:34 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2654627831' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 23 10:30:34 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/965967208' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 23 10:30:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 23 10:30:34 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1237615592' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:30:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='client.26404 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='client.17031 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='client.26416 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='client.26489 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='client.17040 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2654627831' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/965967208' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2586594643' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1237615592' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3359416428' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2023447290' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17085 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26525 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:35.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17094 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26534 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:35 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:30:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Jan 23 10:30:36 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/112016515' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 23 10:30:36 compute-0 nova_compute[249229]: 2026-01-23 10:30:36.225 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='client.26498 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='client.17085 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='client.26525 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='client.17094 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='client.26534 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/112016515' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/514541159' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:30:36 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:30:36 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Jan 23 10:30:36 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3084282709' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 23 10:30:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:36.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:36 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26567 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:36 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17130 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:37 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26573 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:37 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26579 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:37 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17139 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:37.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:37 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2493126188' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 23 10:30:37 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3084282709' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 23 10:30:37 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/25636672' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 23 10:30:37 compute-0 ceph-mon[74335]: from='client.26567 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:37 compute-0 ceph-mon[74335]: from='client.17130 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:37 compute-0 ceph-mon[74335]: from='client.26573 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:37 compute-0 ceph-mon[74335]: pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 23 10:30:37 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010755244' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:30:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:37.842Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:30:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:37.842Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:30:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:37.843Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:30:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Jan 23 10:30:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2274561256' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 23 10:30:38 compute-0 nova_compute[249229]: 2026-01-23 10:30:38.333 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:38.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Jan 23 10:30:38 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820475493' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:38 compute-0 ceph-mon[74335]: from='client.26579 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:38 compute-0 ceph-mon[74335]: from='client.17139 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:38 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1362458980' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 23 10:30:38 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1176070681' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:30:38 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2010755244' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:30:38 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/419775306' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 23 10:30:38 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/571241287' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 23 10:30:38 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2274561256' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 23 10:30:38 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2918126809' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:38 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/372859611' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 23 10:30:38 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3820475493' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:38.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:30:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:38.930Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:38 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26606 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:39 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17172 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:39 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26590 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Jan 23 10:30:39 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/49428748' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:39.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:39] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:30:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:39] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:30:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Jan 23 10:30:40 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4128500998' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 23 10:30:40 compute-0 ceph-mon[74335]: from='client.26606 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:40 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1920671338' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 23 10:30:40 compute-0 ceph-mon[74335]: from='client.17172 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:40 compute-0 ceph-mon[74335]: pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:40 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1796333743' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:40 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/49428748' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Jan 23 10:30:40 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2904729046' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:40.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Jan 23 10:30:40 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550268996' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:41 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26623 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17208 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:41 compute-0 nova_compute[249229]: 2026-01-23 10:30:41.229 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:41 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26642 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mon[74335]: from='client.26590 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4128500998' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/50610532' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1655420808' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2904729046' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2704307379' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1965759482' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2550268996' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2288402487' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mon[74335]: pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:30:41 compute-0 ceph-mon[74335]: from='client.26623 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mon[74335]: from='client.17208 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:41.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:41 compute-0 podman[279291]: 2026-01-23 10:30:41.690297328 +0000 UTC m=+0.071225624 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Jan 23 10:30:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Jan 23 10:30:41 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2641249001' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:41 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26641 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Jan 23 10:30:42 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114055830' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:42 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26650 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:42.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:42 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17229 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:42 compute-0 ceph-mon[74335]: from='client.26642 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:42 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/22422002' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 23 10:30:42 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2641249001' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:42 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/560283955' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:42 compute-0 ceph-mon[74335]: from='client.26641 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:42 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/114055830' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:42 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3879520383' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:42 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26663 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Jan 23 10:30:43 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946102600' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:43 compute-0 nova_compute[249229]: 2026-01-23 10:30:43.336 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26674 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17244 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:43.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:43.730Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26675 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26683 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:30:43 compute-0 ceph-mon[74335]: from='client.26650 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mon[74335]: from='client.17229 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1755609551' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mon[74335]: from='client.26663 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/946102600' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mon[74335]: pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:43 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3090469002' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3587918853' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:43 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17250 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:44 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26681 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:44 compute-0 virtqemud[248554]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 23 10:30:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Jan 23 10:30:44 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100141440' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:44.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Jan 23 10:30:44 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020289235' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:44 compute-0 systemd[1]: Starting Time & Date Service...
Jan 23 10:30:45 compute-0 systemd[1]: Started Time & Date Service.
Jan 23 10:30:45 compute-0 ceph-mon[74335]: from='client.26674 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mon[74335]: from='client.17244 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mon[74335]: from='client.26675 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mon[74335]: from='client.26683 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mon[74335]: from='client.17250 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mon[74335]: from='client.26681 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3100141440' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4118000991' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1756811035' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2020289235' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3681758551' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 23 10:30:45 compute-0 sudo[279832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:30:45 compute-0 sudo[279832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:45 compute-0 sudo[279832]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:45 compute-0 sudo[279868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:30:45 compute-0 sudo[279868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17274 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26710 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26705 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 10:30:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:45 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 10:30:45 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:45.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17280 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 sudo[279868]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26714 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26716 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:45 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:30:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Jan 23 10:30:46 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2973156896' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:46 compute-0 nova_compute[249229]: 2026-01-23 10:30:46.233 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:46 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Jan 23 10:30:46 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1869920413' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:46.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:46 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/634711209' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:46 compute-0 ceph-mon[74335]: pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:46 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:46 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:46 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17301 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:30:47 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:30:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:30:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Jan 23 10:30:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:30:47 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17307 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:30:47 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:30:47 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:30:47 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:30:47 compute-0 sudo[280167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:30:47 compute-0 sudo[280167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:47 compute-0 sudo[280167]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:47 compute-0 sudo[280192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:30:47 compute-0 sudo[280192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:47 compute-0 sudo[280205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:30:47 compute-0 sudo[280205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:47 compute-0 sudo[280205]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:47.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 23 10:30:47 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1944953467' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.17274 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.26710 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.26705 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.17280 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.26714 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.26716 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2973156896' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1942062714' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1869920413' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.17301 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3530506813' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1725430918' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3995885793' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1993038481' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1944953467' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:47.843Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:30:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:47.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:47 compute-0 podman[280293]: 2026-01-23 10:30:47.970221723 +0000 UTC m=+0.041474445 container create bc69db2024da6d18587e9402f79d6cdfac55ce376792e04fadbdbcd29066ab7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:30:47 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26740 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:48 compute-0 systemd[1]: Started libpod-conmon-bc69db2024da6d18587e9402f79d6cdfac55ce376792e04fadbdbcd29066ab7c.scope.
Jan 23 10:30:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:30:48 compute-0 podman[280293]: 2026-01-23 10:30:47.951099707 +0000 UTC m=+0.022352449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:30:48 compute-0 podman[280293]: 2026-01-23 10:30:48.053740926 +0000 UTC m=+0.124993668 container init bc69db2024da6d18587e9402f79d6cdfac55ce376792e04fadbdbcd29066ab7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 23 10:30:48 compute-0 podman[280293]: 2026-01-23 10:30:48.061741024 +0000 UTC m=+0.132993746 container start bc69db2024da6d18587e9402f79d6cdfac55ce376792e04fadbdbcd29066ab7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 23 10:30:48 compute-0 podman[280293]: 2026-01-23 10:30:48.065394009 +0000 UTC m=+0.136646831 container attach bc69db2024da6d18587e9402f79d6cdfac55ce376792e04fadbdbcd29066ab7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:30:48 compute-0 dazzling_varahamihira[280322]: 167 167
Jan 23 10:30:48 compute-0 systemd[1]: libpod-bc69db2024da6d18587e9402f79d6cdfac55ce376792e04fadbdbcd29066ab7c.scope: Deactivated successfully.
Jan 23 10:30:48 compute-0 podman[280293]: 2026-01-23 10:30:48.069130935 +0000 UTC m=+0.140383657 container died bc69db2024da6d18587e9402f79d6cdfac55ce376792e04fadbdbcd29066ab7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:30:48 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26741 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-19a8a46736e642b43776991020924630c991f5c049dee1f4242fb2d2bee44840-merged.mount: Deactivated successfully.
Jan 23 10:30:48 compute-0 podman[280293]: 2026-01-23 10:30:48.110784244 +0000 UTC m=+0.182036966 container remove bc69db2024da6d18587e9402f79d6cdfac55ce376792e04fadbdbcd29066ab7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 23 10:30:48 compute-0 systemd[1]: libpod-conmon-bc69db2024da6d18587e9402f79d6cdfac55ce376792e04fadbdbcd29066ab7c.scope: Deactivated successfully.
Jan 23 10:30:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Jan 23 10:30:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/955048904' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:48 compute-0 nova_compute[249229]: 2026-01-23 10:30:48.340 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:48 compute-0 podman[280347]: 2026-01-23 10:30:48.309744382 +0000 UTC m=+0.035342059 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:30:48 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26747 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:48 compute-0 podman[280347]: 2026-01-23 10:30:48.538748288 +0000 UTC m=+0.264345945 container create 891a919344a33e3f4a32119714e8b4a1bb020046140960010f6c12f099f5f042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gates, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 10:30:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:30:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2448494540' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:30:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:30:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2448494540' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:30:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:48.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:48 compute-0 systemd[1]: Started libpod-conmon-891a919344a33e3f4a32119714e8b4a1bb020046140960010f6c12f099f5f042.scope.
Jan 23 10:30:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b731d7d0e3916337d10c185129064fb381a88938a470f9ae3690236d65ba65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b731d7d0e3916337d10c185129064fb381a88938a470f9ae3690236d65ba65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b731d7d0e3916337d10c185129064fb381a88938a470f9ae3690236d65ba65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b731d7d0e3916337d10c185129064fb381a88938a470f9ae3690236d65ba65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b731d7d0e3916337d10c185129064fb381a88938a470f9ae3690236d65ba65/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:48 compute-0 podman[280347]: 2026-01-23 10:30:48.777836622 +0000 UTC m=+0.503434299 container init 891a919344a33e3f4a32119714e8b4a1bb020046140960010f6c12f099f5f042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gates, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 10:30:48 compute-0 podman[280347]: 2026-01-23 10:30:48.783852393 +0000 UTC m=+0.509450080 container start 891a919344a33e3f4a32119714e8b4a1bb020046140960010f6c12f099f5f042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gates, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:30:48 compute-0 podman[280347]: 2026-01-23 10:30:48.7872313 +0000 UTC m=+0.512828977 container attach 891a919344a33e3f4a32119714e8b4a1bb020046140960010f6c12f099f5f042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gates, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:30:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:48.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:49 compute-0 laughing_gates[280366]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:30:49 compute-0 laughing_gates[280366]: --> All data devices are unavailable
Jan 23 10:30:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Jan 23 10:30:49 compute-0 systemd[1]: libpod-891a919344a33e3f4a32119714e8b4a1bb020046140960010f6c12f099f5f042.scope: Deactivated successfully.
Jan 23 10:30:49 compute-0 podman[280347]: 2026-01-23 10:30:49.123703272 +0000 UTC m=+0.849300929 container died 891a919344a33e3f4a32119714e8b4a1bb020046140960010f6c12f099f5f042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:30:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Jan 23 10:30:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/966591196' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7b731d7d0e3916337d10c185129064fb381a88938a470f9ae3690236d65ba65-merged.mount: Deactivated successfully.
Jan 23 10:30:49 compute-0 podman[280347]: 2026-01-23 10:30:49.389001094 +0000 UTC m=+1.114598751 container remove 891a919344a33e3f4a32119714e8b4a1bb020046140960010f6c12f099f5f042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_gates, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:30:49 compute-0 sudo[280192]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:49 compute-0 sudo[280393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:30:49 compute-0 systemd[1]: libpod-conmon-891a919344a33e3f4a32119714e8b4a1bb020046140960010f6c12f099f5f042.scope: Deactivated successfully.
Jan 23 10:30:49 compute-0 sudo[280393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:49 compute-0 sudo[280393]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:49 compute-0 sudo[280418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:30:49 compute-0 sudo[280418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:49.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:49 compute-0 ceph-mon[74335]: from='client.17307 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:49 compute-0 ceph-mon[74335]: from='client.26740 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:49 compute-0 ceph-mon[74335]: from='client.26741 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/955048904' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2304269384' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2448494540' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:30:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2448494540' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:30:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3049937054' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 23 10:30:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3762032723' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:49] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:30:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:49] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:30:49 compute-0 podman[280486]: 2026-01-23 10:30:49.99050159 +0000 UTC m=+0.059053446 container create 0128cdec790aba88dac062b9f3792e392daec2859de3c4eda060c76d90d38e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:30:50 compute-0 systemd[1]: Started libpod-conmon-0128cdec790aba88dac062b9f3792e392daec2859de3c4eda060c76d90d38e97.scope.
Jan 23 10:30:50 compute-0 podman[280486]: 2026-01-23 10:30:49.967580846 +0000 UTC m=+0.036132712 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:30:50 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:30:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:30:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:30:50 compute-0 podman[280486]: 2026-01-23 10:30:50.080085007 +0000 UTC m=+0.148636863 container init 0128cdec790aba88dac062b9f3792e392daec2859de3c4eda060c76d90d38e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wu, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 23 10:30:50 compute-0 podman[280486]: 2026-01-23 10:30:50.088192789 +0000 UTC m=+0.156744645 container start 0128cdec790aba88dac062b9f3792e392daec2859de3c4eda060c76d90d38e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 10:30:50 compute-0 wonderful_wu[280502]: 167 167
Jan 23 10:30:50 compute-0 systemd[1]: libpod-0128cdec790aba88dac062b9f3792e392daec2859de3c4eda060c76d90d38e97.scope: Deactivated successfully.
Jan 23 10:30:50 compute-0 podman[280486]: 2026-01-23 10:30:50.095447196 +0000 UTC m=+0.163999062 container attach 0128cdec790aba88dac062b9f3792e392daec2859de3c4eda060c76d90d38e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:30:50 compute-0 podman[280486]: 2026-01-23 10:30:50.09631104 +0000 UTC m=+0.164862886 container died 0128cdec790aba88dac062b9f3792e392daec2859de3c4eda060c76d90d38e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wu, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f181ad0bf0a8b47e9a189fd661f3dde981a1f2a2d8b2fb061524ef3c24a55b4-merged.mount: Deactivated successfully.
Jan 23 10:30:50 compute-0 podman[280486]: 2026-01-23 10:30:50.13065346 +0000 UTC m=+0.199205296 container remove 0128cdec790aba88dac062b9f3792e392daec2859de3c4eda060c76d90d38e97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_wu, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:30:50 compute-0 systemd[1]: libpod-conmon-0128cdec790aba88dac062b9f3792e392daec2859de3c4eda060c76d90d38e97.scope: Deactivated successfully.
Jan 23 10:30:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:30:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:30:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:30:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:30:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:30:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:30:50 compute-0 podman[280525]: 2026-01-23 10:30:50.295163075 +0000 UTC m=+0.038756297 container create 0eebc86e97b318c0c601e301c2e5f900aabbcc7788883ca6fbb4c180d0451d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chatterjee, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:30:50 compute-0 systemd[1]: Started libpod-conmon-0eebc86e97b318c0c601e301c2e5f900aabbcc7788883ca6fbb4c180d0451d7c.scope.
Jan 23 10:30:50 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:30:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/373e7766e83d2978afe834a6e663b15d1d8959136200f41de4e3fbdba23d506a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/373e7766e83d2978afe834a6e663b15d1d8959136200f41de4e3fbdba23d506a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/373e7766e83d2978afe834a6e663b15d1d8959136200f41de4e3fbdba23d506a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/373e7766e83d2978afe834a6e663b15d1d8959136200f41de4e3fbdba23d506a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:50 compute-0 podman[280525]: 2026-01-23 10:30:50.279150718 +0000 UTC m=+0.022743960 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:30:50 compute-0 podman[280525]: 2026-01-23 10:30:50.461961806 +0000 UTC m=+0.205555058 container init 0eebc86e97b318c0c601e301c2e5f900aabbcc7788883ca6fbb4c180d0451d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Jan 23 10:30:50 compute-0 podman[280525]: 2026-01-23 10:30:50.470092718 +0000 UTC m=+0.213685950 container start 0eebc86e97b318c0c601e301c2e5f900aabbcc7788883ca6fbb4c180d0451d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chatterjee, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Jan 23 10:30:50 compute-0 podman[280525]: 2026-01-23 10:30:50.475004208 +0000 UTC m=+0.218597460 container attach 0eebc86e97b318c0c601e301c2e5f900aabbcc7788883ca6fbb4c180d0451d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:30:50 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26785 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:50.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]: {
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:     "1": [
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:         {
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             "devices": [
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "/dev/loop3"
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             ],
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             "lv_name": "ceph_lv0",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             "lv_size": "21470642176",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             "name": "ceph_lv0",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             "tags": {
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.cluster_name": "ceph",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.crush_device_class": "",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.encrypted": "0",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.osd_id": "1",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.type": "block",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.vdo": "0",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:                 "ceph.with_tpm": "0"
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             },
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             "type": "block",
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:             "vg_name": "ceph_vg0"
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:         }
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]:     ]
Jan 23 10:30:50 compute-0 zen_chatterjee[280542]: }
Jan 23 10:30:50 compute-0 systemd[1]: libpod-0eebc86e97b318c0c601e301c2e5f900aabbcc7788883ca6fbb4c180d0451d7c.scope: Deactivated successfully.
Jan 23 10:30:50 compute-0 podman[280525]: 2026-01-23 10:30:50.773560249 +0000 UTC m=+0.517153471 container died 0eebc86e97b318c0c601e301c2e5f900aabbcc7788883ca6fbb4c180d0451d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chatterjee, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-373e7766e83d2978afe834a6e663b15d1d8959136200f41de4e3fbdba23d506a-merged.mount: Deactivated successfully.
Jan 23 10:30:50 compute-0 podman[280525]: 2026-01-23 10:30:50.820709414 +0000 UTC m=+0.564302636 container remove 0eebc86e97b318c0c601e301c2e5f900aabbcc7788883ca6fbb4c180d0451d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:30:50 compute-0 systemd[1]: libpod-conmon-0eebc86e97b318c0c601e301c2e5f900aabbcc7788883ca6fbb4c180d0451d7c.scope: Deactivated successfully.
Jan 23 10:30:50 compute-0 sudo[280418]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:50 compute-0 sudo[280567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:30:50 compute-0 sudo[280567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:50 compute-0 sudo[280567]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:50 compute-0 ceph-mon[74335]: from='client.26747 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:50 compute-0 ceph-mon[74335]: pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Jan 23 10:30:50 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/966591196' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:50 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/736282130' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:30:50 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3027541159' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:50 compute-0 sudo[280592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:30:50 compute-0 sudo[280592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Jan 23 10:30:51 compute-0 nova_compute[249229]: 2026-01-23 10:30:51.236 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:51 compute-0 podman[280660]: 2026-01-23 10:30:51.465852786 +0000 UTC m=+0.039310053 container create 56a1e279cb41682791f6891328325709ceb8e7dfa7d0a38d9e79e191dfc7a121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 23 10:30:51 compute-0 systemd[1]: Started libpod-conmon-56a1e279cb41682791f6891328325709ceb8e7dfa7d0a38d9e79e191dfc7a121.scope.
Jan 23 10:30:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:30:51 compute-0 podman[280660]: 2026-01-23 10:30:51.536766869 +0000 UTC m=+0.110224156 container init 56a1e279cb41682791f6891328325709ceb8e7dfa7d0a38d9e79e191dfc7a121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poitras, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 10:30:51 compute-0 podman[280660]: 2026-01-23 10:30:51.543625945 +0000 UTC m=+0.117083212 container start 56a1e279cb41682791f6891328325709ceb8e7dfa7d0a38d9e79e191dfc7a121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:30:51 compute-0 podman[280660]: 2026-01-23 10:30:51.450225429 +0000 UTC m=+0.023682726 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:30:51 compute-0 podman[280660]: 2026-01-23 10:30:51.546850427 +0000 UTC m=+0.120307714 container attach 56a1e279cb41682791f6891328325709ceb8e7dfa7d0a38d9e79e191dfc7a121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poitras, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:30:51 compute-0 kind_poitras[280677]: 167 167
Jan 23 10:30:51 compute-0 systemd[1]: libpod-56a1e279cb41682791f6891328325709ceb8e7dfa7d0a38d9e79e191dfc7a121.scope: Deactivated successfully.
Jan 23 10:30:51 compute-0 conmon[280677]: conmon 56a1e279cb41682791f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56a1e279cb41682791f6891328325709ceb8e7dfa7d0a38d9e79e191dfc7a121.scope/container/memory.events
Jan 23 10:30:51 compute-0 podman[280660]: 2026-01-23 10:30:51.548800103 +0000 UTC m=+0.122257370 container died 56a1e279cb41682791f6891328325709ceb8e7dfa7d0a38d9e79e191dfc7a121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poitras, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c9ae102cc8fb763caa15e3c4895b549753e77e742ca2131ba5913641fbd003a-merged.mount: Deactivated successfully.
Jan 23 10:30:51 compute-0 podman[280660]: 2026-01-23 10:30:51.589045041 +0000 UTC m=+0.162502318 container remove 56a1e279cb41682791f6891328325709ceb8e7dfa7d0a38d9e79e191dfc7a121 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_poitras, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:30:51 compute-0 systemd[1]: libpod-conmon-56a1e279cb41682791f6891328325709ceb8e7dfa7d0a38d9e79e191dfc7a121.scope: Deactivated successfully.
Jan 23 10:30:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:51.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:51 compute-0 podman[280701]: 2026-01-23 10:30:51.756658635 +0000 UTC m=+0.043362999 container create ae2e2f382c63cc41739eb49cf38dd3054aeb3496bc09cb0801bd9c7eaedabea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:30:51 compute-0 systemd[1]: Started libpod-conmon-ae2e2f382c63cc41739eb49cf38dd3054aeb3496bc09cb0801bd9c7eaedabea5.scope.
Jan 23 10:30:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac74cf7b590a2f88dcfb93f6d01642bec64dc5befd1bac65fe43c83323d22267/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac74cf7b590a2f88dcfb93f6d01642bec64dc5befd1bac65fe43c83323d22267/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac74cf7b590a2f88dcfb93f6d01642bec64dc5befd1bac65fe43c83323d22267/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac74cf7b590a2f88dcfb93f6d01642bec64dc5befd1bac65fe43c83323d22267/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:30:51 compute-0 podman[280701]: 2026-01-23 10:30:51.739529566 +0000 UTC m=+0.026233940 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:30:51 compute-0 podman[280701]: 2026-01-23 10:30:51.83356316 +0000 UTC m=+0.120267564 container init ae2e2f382c63cc41739eb49cf38dd3054aeb3496bc09cb0801bd9c7eaedabea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:30:51 compute-0 podman[280701]: 2026-01-23 10:30:51.839733446 +0000 UTC m=+0.126437830 container start ae2e2f382c63cc41739eb49cf38dd3054aeb3496bc09cb0801bd9c7eaedabea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_euclid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:30:51 compute-0 podman[280701]: 2026-01-23 10:30:51.843699939 +0000 UTC m=+0.130404323 container attach ae2e2f382c63cc41739eb49cf38dd3054aeb3496bc09cb0801bd9c7eaedabea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_euclid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 10:30:51 compute-0 ceph-mon[74335]: from='client.26785 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:51 compute-0 ceph-mon[74335]: pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Jan 23 10:30:51 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2631473347' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:51 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3298491043' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:52 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26803 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:52 compute-0 lvm[280793]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:30:52 compute-0 lvm[280793]: VG ceph_vg0 finished
Jan 23 10:30:52 compute-0 nice_euclid[280718]: {}
Jan 23 10:30:52 compute-0 systemd[1]: libpod-ae2e2f382c63cc41739eb49cf38dd3054aeb3496bc09cb0801bd9c7eaedabea5.scope: Deactivated successfully.
Jan 23 10:30:52 compute-0 podman[280701]: 2026-01-23 10:30:52.587878388 +0000 UTC m=+0.874582782 container died ae2e2f382c63cc41739eb49cf38dd3054aeb3496bc09cb0801bd9c7eaedabea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 23 10:30:52 compute-0 systemd[1]: libpod-ae2e2f382c63cc41739eb49cf38dd3054aeb3496bc09cb0801bd9c7eaedabea5.scope: Consumed 1.138s CPU time.
Jan 23 10:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac74cf7b590a2f88dcfb93f6d01642bec64dc5befd1bac65fe43c83323d22267-merged.mount: Deactivated successfully.
Jan 23 10:30:52 compute-0 podman[280701]: 2026-01-23 10:30:52.631897494 +0000 UTC m=+0.918601858 container remove ae2e2f382c63cc41739eb49cf38dd3054aeb3496bc09cb0801bd9c7eaedabea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_euclid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 10:30:52 compute-0 systemd[1]: libpod-conmon-ae2e2f382c63cc41739eb49cf38dd3054aeb3496bc09cb0801bd9c7eaedabea5.scope: Deactivated successfully.
Jan 23 10:30:52 compute-0 sudo[280592]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:30:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:52.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:52 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26815 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:30:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 512 B/s rd, 0 op/s
Jan 23 10:30:53 compute-0 ceph-mon[74335]: from='client.26803 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:53 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/814440401' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 23 10:30:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:53 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26821 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:53 compute-0 sudo[280811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:30:53 compute-0 sudo[280811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:30:53 compute-0 sudo[280811]: pam_unix(sudo:session): session closed for user root
Jan 23 10:30:53 compute-0 nova_compute[249229]: 2026-01-23 10:30:53.343 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:53.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:53.732Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:54 compute-0 ceph-mon[74335]: from='client.26815 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:54 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:54 compute-0 ceph-mon[74335]: pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 512 B/s rd, 0 op/s
Jan 23 10:30:54 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:30:54 compute-0 ceph-mon[74335]: from='client.26821 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1998309136' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 23 10:30:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2533419223' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:54.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:54 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26839 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26845 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:30:55 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:30:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:55.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:56 compute-0 nova_compute[249229]: 2026-01-23 10:30:56.240 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:56 compute-0 ceph-mon[74335]: from='client.26839 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:56 compute-0 ceph-mon[74335]: pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 513 B/s rd, 0 op/s
Jan 23 10:30:56 compute-0 ceph-mon[74335]: from='client.26845 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:30:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:56.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:30:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Jan 23 10:30:57 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26863 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:30:57 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.26869 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:30:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:57.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:30:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1831285676' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:30:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2165199586' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 23 10:30:57 compute-0 ceph-mon[74335]: pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Jan 23 10:30:57 compute-0 ceph-mon[74335]: from='client.26863 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:57.846Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:30:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:57.847Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:30:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:57.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:30:58 compute-0 nova_compute[249229]: 2026-01-23 10:30:58.347 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:30:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:58.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:58.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:30:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:30:58.932Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:30:59 compute-0 ceph-mon[74335]: from='client.26869 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:30:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4212707187' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4094141977' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 23 10:30:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:30:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:30:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:30:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:59.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:30:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:30:59.786 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:30:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:30:59.788 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:30:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:30:59.788 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:30:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:59] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:30:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:30:59] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:31:00 compute-0 ceph-mon[74335]: pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:00.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:01 compute-0 nova_compute[249229]: 2026-01-23 10:31:01.244 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:01.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:02 compute-0 ceph-mon[74335]: pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:02 compute-0 podman[280848]: 2026-01-23 10:31:02.591417442 +0000 UTC m=+0.114837699 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 10:31:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:02.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:03 compute-0 ceph-mon[74335]: pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:03 compute-0 nova_compute[249229]: 2026-01-23 10:31:03.348 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:03.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:03.734Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:04.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:31:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:31:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:31:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:05.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:06 compute-0 ceph-mon[74335]: pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:06 compute-0 nova_compute[249229]: 2026-01-23 10:31:06.246 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:06.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:07.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:07 compute-0 sudo[280879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:31:07 compute-0 sudo[280879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:07 compute-0 sudo[280879]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:07 compute-0 nova_compute[249229]: 2026-01-23 10:31:07.719 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:07.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:08 compute-0 nova_compute[249229]: 2026-01-23 10:31:08.350 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:08 compute-0 ceph-mon[74335]: pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:08.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:08.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:09 compute-0 ceph-mon[74335]: pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:09] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:31:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:09] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:31:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:31:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:10.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:31:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:11 compute-0 nova_compute[249229]: 2026-01-23 10:31:11.249 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:11.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:11 compute-0 nova_compute[249229]: 2026-01-23 10:31:11.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:11 compute-0 nova_compute[249229]: 2026-01-23 10:31:11.718 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 10:31:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:12 compute-0 ceph-mon[74335]: pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:12 compute-0 podman[280909]: 2026-01-23 10:31:12.516562729 +0000 UTC m=+0.048517516 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:31:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:12.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:12 compute-0 nova_compute[249229]: 2026-01-23 10:31:12.734 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:12 compute-0 nova_compute[249229]: 2026-01-23 10:31:12.735 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:31:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:13 compute-0 nova_compute[249229]: 2026-01-23 10:31:13.352 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:13 compute-0 ceph-mon[74335]: pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:13.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:13 compute-0 nova_compute[249229]: 2026-01-23 10:31:13.708 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:13.735Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:31:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:13.737Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:31:13 compute-0 nova_compute[249229]: 2026-01-23 10:31:13.742 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:13 compute-0 nova_compute[249229]: 2026-01-23 10:31:13.743 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:13 compute-0 nova_compute[249229]: 2026-01-23 10:31:13.770 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:31:13 compute-0 nova_compute[249229]: 2026-01-23 10:31:13.771 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:31:13 compute-0 nova_compute[249229]: 2026-01-23 10:31:13.771 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:31:13 compute-0 nova_compute[249229]: 2026-01-23 10:31:13.771 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:31:13 compute-0 nova_compute[249229]: 2026-01-23 10:31:13.772 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:31:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:31:14 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905912035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:31:14 compute-0 nova_compute[249229]: 2026-01-23 10:31:14.259 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
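The resource audit above shells out to exactly the command in the log, ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf, and the monitor's audit channel records the matching dispatch from client.openstack. A stand-alone sketch of the same probe; the "stats"/"pools" keys reflect the commonly documented ceph df JSON layout and are an assumption about the output shape:

    # Sketch: run the same "ceph df" probe nova_compute logs above and summarize the result.
    import json
    import subprocess

    cmd = [
        "ceph", "df", "--format=json",
        "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    df = json.loads(out.stdout)

    # Top-level "stats" and per-pool "pools" entries are assumed, not taken from this log.
    stats = df.get("stats", {})
    print("total bytes:", stats.get("total_bytes"))
    print("avail bytes:", stats.get("total_avail_bytes"))
    for pool in df.get("pools", []):
        print(pool.get("name"), pool.get("stats", {}).get("bytes_used"))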
Jan 23 10:31:14 compute-0 nova_compute[249229]: 2026-01-23 10:31:14.414 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:31:14 compute-0 nova_compute[249229]: 2026-01-23 10:31:14.415 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4419MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:31:14 compute-0 nova_compute[249229]: 2026-01-23 10:31:14.416 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:31:14 compute-0 nova_compute[249229]: 2026-01-23 10:31:14.416 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:31:14 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2905912035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:31:14 compute-0 nova_compute[249229]: 2026-01-23 10:31:14.630 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:31:14 compute-0 nova_compute[249229]: 2026-01-23 10:31:14.630 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:31:14 compute-0 nova_compute[249229]: 2026-01-23 10:31:14.647 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:31:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:14.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:31:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147924141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:31:15 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.081 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.087 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:31:15 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.117 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.119 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.120 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
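The inventory reported just above (VCPU: total 8, allocation_ratio 4.0; MEMORY_MB: total 7679, reserved 512; DISK_GB: total 59, reserved 1, allocation_ratio 0.9) is what placement uses to size this node; schedulable capacity works out to (total - reserved) * allocation_ratio per resource class. A short check using only the values printed in the log:

    # Sketch: recompute schedulable capacity from the inventory nova_compute reported above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # Expected: VCPU 32, MEMORY_MB 7167, DISK_GB 52.2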
Jan 23 10:31:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:15.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:31:15 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/147924141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:31:15 compute-0 ceph-mon[74335]: pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.731 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.732 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.732 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 10:31:15 compute-0 nova_compute[249229]: 2026-01-23 10:31:15.755 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 10:31:16 compute-0 nova_compute[249229]: 2026-01-23 10:31:16.252 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:16.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:16 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/4255286849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:31:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:31:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:17.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:31:17 compute-0 nova_compute[249229]: 2026-01-23 10:31:17.741 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:17.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:17 compute-0 ceph-mon[74335]: pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2961280680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:31:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/177182351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:31:18 compute-0 nova_compute[249229]: 2026-01-23 10:31:18.354 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:18 compute-0 nova_compute[249229]: 2026-01-23 10:31:18.632 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:18.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:18 compute-0 nova_compute[249229]: 2026-01-23 10:31:18.790 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:18.934Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:31:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:18.936Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:31:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:18.936Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
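The alertmanager dispatcher repeatedly fails to deliver ceph-dashboard webhook notifications: posts to http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver and http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver die with "dial tcp ... i/o timeout" or hit the retry context deadline. A small diagnostic sketch that checks whether the failure is already at the TCP connect stage, using the hosts and port from the log; the 5-second timeout is an arbitrary choice:

    # Sketch: probe the prometheus_receiver endpoints the alertmanager log reports as timing out.
    import socket

    RECEIVERS = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]

    for host, port in RECEIVERS:
        try:
            # A plain TCP connect mirrors the "dial tcp ... i/o timeout" stage of the failure.
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} TCP connect ok; any failure would be at the HTTP layer")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")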
Jan 23 10:31:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/339495991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:31:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:31:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:19.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:31:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:19] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:31:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:19] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:31:20
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'vms', '.nfs', 'volumes', 'cephfs.cephfs.data', '.rgw.root']
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:31:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:31:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:31:20 compute-0 ceph-mon[74335]: pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
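The pg_autoscaler lines above all fit one pattern: the logged pg target equals the pool's usage fraction times its bias times 300 (for example 'images': 0.000665858301588852 * 1.0 * 300 ≈ 0.1998, and 'cephfs.cephfs.meta': 5.087256625643029e-07 * 4.0 * 300 ≈ 0.00061), after which the module quantizes to the PG counts shown. The factor 300 would be consistent with a target of 100 PGs per OSD across 3 OSDs, which is an inference, not something stated in the log. A quick arithmetic check against the logged values:

    # Sketch: verify the pg_autoscaler targets logged above (target = usage_fraction * bias * 300).
    # The 300 is assumed to be 100 target PGs per OSD times 3 OSDs; the log does not state it.
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0, 0.0021557249951162337),
        "vms":                (6.359070782053786e-08,  1.0, 1.907721234616136e-05),
        "images":             (0.000665858301588852,   1.0, 0.19975749047665559),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0, 0.0006104707950771635),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    }

    for name, (usage, bias, logged_target) in pools.items():
        computed = usage * bias * 300
        print(f"{name}: computed {computed:.6g} vs logged {logged_target:.6g}")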
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:31:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:31:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:31:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:20.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:31:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:21 compute-0 nova_compute[249229]: 2026-01-23 10:31:21.256 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:31:21 compute-0 ceph-mon[74335]: pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:21.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:21 compute-0 nova_compute[249229]: 2026-01-23 10:31:21.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:22.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:22 compute-0 nova_compute[249229]: 2026-01-23 10:31:22.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:23 compute-0 nova_compute[249229]: 2026-01-23 10:31:23.356 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:23.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:23.738Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:24 compute-0 ceph-mon[74335]: pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:24 compute-0 nova_compute[249229]: 2026-01-23 10:31:24.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:31:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:24.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:25.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:26 compute-0 nova_compute[249229]: 2026-01-23 10:31:26.259 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:26 compute-0 ceph-mon[74335]: pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:26.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:27 compute-0 ceph-mon[74335]: pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:31:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:27.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:31:27 compute-0 sudo[280991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:31:27 compute-0 sudo[280991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:27 compute-0 sudo[280991]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:27.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:31:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:27.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:31:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:27.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:28 compute-0 nova_compute[249229]: 2026-01-23 10:31:28.359 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:28.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:28.937Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:31:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:28.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:29 compute-0 ceph-mon[74335]: pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:29.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:29] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:31:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:29] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
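The pair of lines above is the ceph-mgr prometheus module's access log for a Prometheus 2.51.0 scrape of its /metrics endpoint (about 48.5 KB of exposition per scrape, requested from 192.168.122.100). A minimal sketch of fetching the same endpoint directly; port 9283 is the module's usual default and is an assumption, since the port never appears in this log:

    # Sketch: scrape the ceph-mgr prometheus endpoint whose access log appears above.
    # The port (9283) is the module's common default, not something stated in the log.
    from urllib.request import urlopen

    with urlopen("http://compute-0.ctlplane.example.com:9283/metrics", timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")

    print(f"fetched {len(body)} bytes of metrics exposition")
    for line in body.splitlines():
        if line and not line.startswith("#"):
            print(line)  # first non-comment sample line
            break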
Jan 23 10:31:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:30.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:31 compute-0 nova_compute[249229]: 2026-01-23 10:31:31.262 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:31.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:31 compute-0 ceph-mon[74335]: pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:31:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:32.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:31:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:33 compute-0 nova_compute[249229]: 2026-01-23 10:31:33.512 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:33 compute-0 podman[281022]: 2026-01-23 10:31:33.677271582 +0000 UTC m=+0.127661445 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 10:31:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:33.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:33.740Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:34 compute-0 ceph-mon[74335]: pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:34.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:31:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:31:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:31:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:35.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:36 compute-0 nova_compute[249229]: 2026-01-23 10:31:36.265 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:36 compute-0 ceph-mon[74335]: pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:36.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:37 compute-0 ceph-mon[74335]: pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:31:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:37.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:31:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:37.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:38 compute-0 nova_compute[249229]: 2026-01-23 10:31:38.553 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:38.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:38.938Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:31:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:38.939Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:31:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:38.940Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:31:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:39.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:31:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:39] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 23 10:31:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:39] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 23 10:31:40 compute-0 ceph-mon[74335]: pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:40.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:41 compute-0 nova_compute[249229]: 2026-01-23 10:31:41.269 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:41 compute-0 ceph-mon[74335]: pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:41.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:42.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:42 compute-0 podman[281059]: 2026-01-23 10:31:42.751341129 +0000 UTC m=+0.053084346 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 23 10:31:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:43 compute-0 nova_compute[249229]: 2026-01-23 10:31:43.555 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:43.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:43.741Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:44 compute-0 sudo[272816]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:44 compute-0 sshd-session[272815]: Received disconnect from 192.168.122.10 port 59636:11: disconnected by user
Jan 23 10:31:44 compute-0 sshd-session[272815]: Disconnected from user zuul 192.168.122.10 port 59636
Jan 23 10:31:44 compute-0 sshd-session[272812]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:31:44 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Jan 23 10:31:44 compute-0 systemd[1]: session-56.scope: Consumed 3min 8.286s CPU time, 757.4M memory peak, read 268.3M from disk, written 77.4M to disk.
Jan 23 10:31:44 compute-0 systemd-logind[784]: Session 56 logged out. Waiting for processes to exit.
Jan 23 10:31:44 compute-0 systemd-logind[784]: Removed session 56.
Jan 23 10:31:44 compute-0 ceph-mon[74335]: pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:44 compute-0 sshd-session[281082]: Accepted publickey for zuul from 192.168.122.10 port 33218 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 10:31:44 compute-0 systemd-logind[784]: New session 57 of user zuul.
Jan 23 10:31:44 compute-0 systemd[1]: Started Session 57 of User zuul.
Jan 23 10:31:44 compute-0 sshd-session[281082]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:31:44 compute-0 sudo[281086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-01-23-kfwytsz.tar.xz
Jan 23 10:31:44 compute-0 sudo[281086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:31:44 compute-0 sudo[281086]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:44 compute-0 sshd-session[281085]: Received disconnect from 192.168.122.10 port 33218:11: disconnected by user
Jan 23 10:31:44 compute-0 sshd-session[281085]: Disconnected from user zuul 192.168.122.10 port 33218
Jan 23 10:31:44 compute-0 sshd-session[281082]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:31:44 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Jan 23 10:31:44 compute-0 systemd-logind[784]: Session 57 logged out. Waiting for processes to exit.
Jan 23 10:31:44 compute-0 systemd-logind[784]: Removed session 57.
Jan 23 10:31:44 compute-0 sshd-session[281112]: Accepted publickey for zuul from 192.168.122.10 port 33234 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 10:31:44 compute-0 systemd-logind[784]: New session 58 of user zuul.
Jan 23 10:31:44 compute-0 systemd[1]: Started Session 58 of User zuul.
Jan 23 10:31:44 compute-0 sshd-session[281112]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:31:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:44.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:44 compute-0 sudo[281116]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Jan 23 10:31:44 compute-0 sudo[281116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:31:44 compute-0 sudo[281116]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:44 compute-0 sshd-session[281115]: Received disconnect from 192.168.122.10 port 33234:11: disconnected by user
Jan 23 10:31:44 compute-0 sshd-session[281115]: Disconnected from user zuul 192.168.122.10 port 33234
Jan 23 10:31:44 compute-0 sshd-session[281112]: pam_unix(sshd:session): session closed for user zuul
Jan 23 10:31:44 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Jan 23 10:31:44 compute-0 systemd-logind[784]: Session 58 logged out. Waiting for processes to exit.
Jan 23 10:31:44 compute-0 systemd-logind[784]: Removed session 58.
Jan 23 10:31:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:45 compute-0 ceph-mon[74335]: pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:45.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:46 compute-0 nova_compute[249229]: 2026-01-23 10:31:46.273 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:46.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:47.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:47.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:47 compute-0 sudo[281143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:31:47 compute-0 sudo[281143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:47 compute-0 sudo[281143]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:48 compute-0 ceph-mon[74335]: pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:48 compute-0 nova_compute[249229]: 2026-01-23 10:31:48.558 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:31:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1552987279' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:31:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:31:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1552987279' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:31:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:48.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:48.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1552987279' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:31:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1552987279' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:31:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:49.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:49] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 23 10:31:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:49] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 23 10:31:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:31:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:31:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:31:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:31:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:31:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:31:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:31:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:31:50 compute-0 ceph-mon[74335]: pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:31:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:50.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:51 compute-0 nova_compute[249229]: 2026-01-23 10:31:51.276 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:51 compute-0 ceph-mon[74335]: pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:31:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:51.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:31:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:52.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:31:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:53 compute-0 sudo[281174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:31:53 compute-0 sudo[281174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:53 compute-0 sudo[281174]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:53 compute-0 nova_compute[249229]: 2026-01-23 10:31:53.558 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:53 compute-0 sudo[281199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:31:53 compute-0 sudo[281199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:53.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:53.742Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 10:31:53 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:31:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 10:31:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:31:54 compute-0 sudo[281199]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:54 compute-0 ceph-mon[74335]: pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:31:54 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:31:54 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:31:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:31:54 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:31:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:31:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:31:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 534 B/s rd, 0 op/s
Jan 23 10:31:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:31:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:31:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:31:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:31:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:31:54 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:31:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:31:54 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:31:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:31:54 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:31:54 compute-0 sudo[281259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:31:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:31:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:54.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:31:54 compute-0 sudo[281259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:54 compute-0 sudo[281259]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:54 compute-0 sudo[281284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:31:54 compute-0 sudo[281284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:55 compute-0 podman[281351]: 2026-01-23 10:31:55.17493049 +0000 UTC m=+0.037758489 container create 1bd10551389e916360153e6c78d8089065bd55b80b4c16b1743e678d020c6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_germain, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 10:31:55 compute-0 systemd[1]: Started libpod-conmon-1bd10551389e916360153e6c78d8089065bd55b80b4c16b1743e678d020c6b57.scope.
Jan 23 10:31:55 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:31:55 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:31:55 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:31:55 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:31:55 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:31:55 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:31:55 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:31:55 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:31:55 compute-0 podman[281351]: 2026-01-23 10:31:55.158148501 +0000 UTC m=+0.020976520 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:31:55 compute-0 podman[281351]: 2026-01-23 10:31:55.258723961 +0000 UTC m=+0.121551980 container init 1bd10551389e916360153e6c78d8089065bd55b80b4c16b1743e678d020c6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_germain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:31:55 compute-0 podman[281351]: 2026-01-23 10:31:55.265502065 +0000 UTC m=+0.128330064 container start 1bd10551389e916360153e6c78d8089065bd55b80b4c16b1743e678d020c6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_germain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 10:31:55 compute-0 podman[281351]: 2026-01-23 10:31:55.268293244 +0000 UTC m=+0.131121263 container attach 1bd10551389e916360153e6c78d8089065bd55b80b4c16b1743e678d020c6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_germain, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:31:55 compute-0 silly_germain[281367]: 167 167
Jan 23 10:31:55 compute-0 systemd[1]: libpod-1bd10551389e916360153e6c78d8089065bd55b80b4c16b1743e678d020c6b57.scope: Deactivated successfully.
Jan 23 10:31:55 compute-0 conmon[281367]: conmon 1bd10551389e91636015 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1bd10551389e916360153e6c78d8089065bd55b80b4c16b1743e678d020c6b57.scope/container/memory.events
Jan 23 10:31:55 compute-0 podman[281351]: 2026-01-23 10:31:55.274879472 +0000 UTC m=+0.137707471 container died 1bd10551389e916360153e6c78d8089065bd55b80b4c16b1743e678d020c6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd38b07e6c5b182e22e5bf1b1b1755d9509521395bf380c7a3fdea77086e120a-merged.mount: Deactivated successfully.
Jan 23 10:31:55 compute-0 podman[281351]: 2026-01-23 10:31:55.314970086 +0000 UTC m=+0.177798085 container remove 1bd10551389e916360153e6c78d8089065bd55b80b4c16b1743e678d020c6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:31:55 compute-0 systemd[1]: libpod-conmon-1bd10551389e916360153e6c78d8089065bd55b80b4c16b1743e678d020c6b57.scope: Deactivated successfully.
Jan 23 10:31:55 compute-0 podman[281390]: 2026-01-23 10:31:55.499824852 +0000 UTC m=+0.052221111 container create 8715c9dc53f28ee112370c161c02f4273a263b9a85975afd5cd2b9154c087a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bell, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 10:31:55 compute-0 systemd[1]: Started libpod-conmon-8715c9dc53f28ee112370c161c02f4273a263b9a85975afd5cd2b9154c087a13.scope.
Jan 23 10:31:55 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:31:55 compute-0 podman[281390]: 2026-01-23 10:31:55.477836515 +0000 UTC m=+0.030232824 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a9d738469518276c2161452ffc1f4f8c4c7624c3e79a10207a2754df0f4f99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a9d738469518276c2161452ffc1f4f8c4c7624c3e79a10207a2754df0f4f99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a9d738469518276c2161452ffc1f4f8c4c7624c3e79a10207a2754df0f4f99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a9d738469518276c2161452ffc1f4f8c4c7624c3e79a10207a2754df0f4f99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a9d738469518276c2161452ffc1f4f8c4c7624c3e79a10207a2754df0f4f99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:55 compute-0 podman[281390]: 2026-01-23 10:31:55.609650345 +0000 UTC m=+0.162046604 container init 8715c9dc53f28ee112370c161c02f4273a263b9a85975afd5cd2b9154c087a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 23 10:31:55 compute-0 podman[281390]: 2026-01-23 10:31:55.615650837 +0000 UTC m=+0.168047096 container start 8715c9dc53f28ee112370c161c02f4273a263b9a85975afd5cd2b9154c087a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 23 10:31:55 compute-0 podman[281390]: 2026-01-23 10:31:55.619330642 +0000 UTC m=+0.171726911 container attach 8715c9dc53f28ee112370c161c02f4273a263b9a85975afd5cd2b9154c087a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bell, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:31:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:55.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:55 compute-0 heuristic_bell[281407]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:31:55 compute-0 heuristic_bell[281407]: --> All data devices are unavailable
Jan 23 10:31:55 compute-0 systemd[1]: libpod-8715c9dc53f28ee112370c161c02f4273a263b9a85975afd5cd2b9154c087a13.scope: Deactivated successfully.
Jan 23 10:31:55 compute-0 podman[281390]: 2026-01-23 10:31:55.98333261 +0000 UTC m=+0.535728869 container died 8715c9dc53f28ee112370c161c02f4273a263b9a85975afd5cd2b9154c087a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bell, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:31:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a9d738469518276c2161452ffc1f4f8c4c7624c3e79a10207a2754df0f4f99-merged.mount: Deactivated successfully.
Jan 23 10:31:56 compute-0 podman[281390]: 2026-01-23 10:31:56.026835642 +0000 UTC m=+0.579231901 container remove 8715c9dc53f28ee112370c161c02f4273a263b9a85975afd5cd2b9154c087a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_bell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 23 10:31:56 compute-0 systemd[1]: libpod-conmon-8715c9dc53f28ee112370c161c02f4273a263b9a85975afd5cd2b9154c087a13.scope: Deactivated successfully.
Jan 23 10:31:56 compute-0 sudo[281284]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:56 compute-0 sudo[281435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:31:56 compute-0 sudo[281435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:56 compute-0 sudo[281435]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:56 compute-0 sudo[281460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:31:56 compute-0 sudo[281460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:56 compute-0 ceph-mon[74335]: pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 534 B/s rd, 0 op/s
Jan 23 10:31:56 compute-0 nova_compute[249229]: 2026-01-23 10:31:56.279 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 801 B/s rd, 0 op/s
Jan 23 10:31:56 compute-0 podman[281527]: 2026-01-23 10:31:56.581272875 +0000 UTC m=+0.024194351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:31:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:31:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:56.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:31:56 compute-0 podman[281527]: 2026-01-23 10:31:56.765876384 +0000 UTC m=+0.208797810 container create 0a7de3c4f62d20f7a3618d478d1831f812990b7bf85212f8d040603f0c566e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_proskuriakova, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 23 10:31:56 compute-0 systemd[1]: Started libpod-conmon-0a7de3c4f62d20f7a3618d478d1831f812990b7bf85212f8d040603f0c566e84.scope.
Jan 23 10:31:56 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:31:56 compute-0 podman[281527]: 2026-01-23 10:31:56.954325932 +0000 UTC m=+0.397247378 container init 0a7de3c4f62d20f7a3618d478d1831f812990b7bf85212f8d040603f0c566e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_proskuriakova, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:31:56 compute-0 podman[281527]: 2026-01-23 10:31:56.961525157 +0000 UTC m=+0.404446583 container start 0a7de3c4f62d20f7a3618d478d1831f812990b7bf85212f8d040603f0c566e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 23 10:31:56 compute-0 podman[281527]: 2026-01-23 10:31:56.965007907 +0000 UTC m=+0.407929343 container attach 0a7de3c4f62d20f7a3618d478d1831f812990b7bf85212f8d040603f0c566e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 23 10:31:56 compute-0 determined_proskuriakova[281544]: 167 167
Jan 23 10:31:56 compute-0 systemd[1]: libpod-0a7de3c4f62d20f7a3618d478d1831f812990b7bf85212f8d040603f0c566e84.scope: Deactivated successfully.
Jan 23 10:31:56 compute-0 podman[281527]: 2026-01-23 10:31:56.966716766 +0000 UTC m=+0.409638212 container died 0a7de3c4f62d20f7a3618d478d1831f812990b7bf85212f8d040603f0c566e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 23 10:31:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9818e65ebcd5cf6c82a6613450f7c0678caf0cfc7171cc10c9e2cc12d327c1bd-merged.mount: Deactivated successfully.
Jan 23 10:31:57 compute-0 podman[281527]: 2026-01-23 10:31:57.00891416 +0000 UTC m=+0.451835586 container remove 0a7de3c4f62d20f7a3618d478d1831f812990b7bf85212f8d040603f0c566e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:31:57 compute-0 systemd[1]: libpod-conmon-0a7de3c4f62d20f7a3618d478d1831f812990b7bf85212f8d040603f0c566e84.scope: Deactivated successfully.
Jan 23 10:31:57 compute-0 podman[281567]: 2026-01-23 10:31:57.189269677 +0000 UTC m=+0.047286900 container create 5cd805b5b57455be8a8ec6e9eef972d181aa8299f58e27f3a016f79691577ddf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:31:57 compute-0 systemd[1]: Started libpod-conmon-5cd805b5b57455be8a8ec6e9eef972d181aa8299f58e27f3a016f79691577ddf.scope.
Jan 23 10:31:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bafc02719ec724383f55fc59ec87d3ec9e83b64f86aa1489e02c0f2d3cbbf5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bafc02719ec724383f55fc59ec87d3ec9e83b64f86aa1489e02c0f2d3cbbf5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bafc02719ec724383f55fc59ec87d3ec9e83b64f86aa1489e02c0f2d3cbbf5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bafc02719ec724383f55fc59ec87d3ec9e83b64f86aa1489e02c0f2d3cbbf5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:57 compute-0 podman[281567]: 2026-01-23 10:31:57.258063911 +0000 UTC m=+0.116081164 container init 5cd805b5b57455be8a8ec6e9eef972d181aa8299f58e27f3a016f79691577ddf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bohr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:31:57 compute-0 podman[281567]: 2026-01-23 10:31:57.171952163 +0000 UTC m=+0.029969416 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:31:57 compute-0 podman[281567]: 2026-01-23 10:31:57.26996509 +0000 UTC m=+0.127982313 container start 5cd805b5b57455be8a8ec6e9eef972d181aa8299f58e27f3a016f79691577ddf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bohr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 10:31:57 compute-0 podman[281567]: 2026-01-23 10:31:57.272569184 +0000 UTC m=+0.130586457 container attach 5cd805b5b57455be8a8ec6e9eef972d181aa8299f58e27f3a016f79691577ddf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 23 10:31:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]: {
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:     "1": [
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:         {
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             "devices": [
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "/dev/loop3"
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             ],
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             "lv_name": "ceph_lv0",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             "lv_size": "21470642176",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             "name": "ceph_lv0",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             "tags": {
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.cluster_name": "ceph",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.crush_device_class": "",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.encrypted": "0",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.osd_id": "1",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.type": "block",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.vdo": "0",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:                 "ceph.with_tpm": "0"
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             },
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             "type": "block",
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:             "vg_name": "ceph_vg0"
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:         }
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]:     ]
Jan 23 10:31:57 compute-0 quizzical_bohr[281583]: }
Jan 23 10:31:57 compute-0 systemd[1]: libpod-5cd805b5b57455be8a8ec6e9eef972d181aa8299f58e27f3a016f79691577ddf.scope: Deactivated successfully.
Jan 23 10:31:57 compute-0 podman[281567]: 2026-01-23 10:31:57.588159241 +0000 UTC m=+0.446176464 container died 5cd805b5b57455be8a8ec6e9eef972d181aa8299f58e27f3a016f79691577ddf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bohr, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:31:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bafc02719ec724383f55fc59ec87d3ec9e83b64f86aa1489e02c0f2d3cbbf5b-merged.mount: Deactivated successfully.
Jan 23 10:31:57 compute-0 podman[281567]: 2026-01-23 10:31:57.632713123 +0000 UTC m=+0.490730346 container remove 5cd805b5b57455be8a8ec6e9eef972d181aa8299f58e27f3a016f79691577ddf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 23 10:31:57 compute-0 systemd[1]: libpod-conmon-5cd805b5b57455be8a8ec6e9eef972d181aa8299f58e27f3a016f79691577ddf.scope: Deactivated successfully.
Jan 23 10:31:57 compute-0 sudo[281460]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:57 compute-0 sudo[281603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:31:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:57.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:57 compute-0 sudo[281603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:57 compute-0 sudo[281603]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:57 compute-0 sudo[281628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:31:57 compute-0 sudo[281628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:57.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:58 compute-0 podman[281696]: 2026-01-23 10:31:58.185208241 +0000 UTC m=+0.035423202 container create f1286b6086be43a66fce46fdea939d0b842e28a8e017763af56b8d14bc557d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 23 10:31:58 compute-0 systemd[1]: Started libpod-conmon-f1286b6086be43a66fce46fdea939d0b842e28a8e017763af56b8d14bc557d40.scope.
Jan 23 10:31:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:31:58 compute-0 podman[281696]: 2026-01-23 10:31:58.245714898 +0000 UTC m=+0.095929869 container init f1286b6086be43a66fce46fdea939d0b842e28a8e017763af56b8d14bc557d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:31:58 compute-0 podman[281696]: 2026-01-23 10:31:58.250886085 +0000 UTC m=+0.101101046 container start f1286b6086be43a66fce46fdea939d0b842e28a8e017763af56b8d14bc557d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:31:58 compute-0 podman[281696]: 2026-01-23 10:31:58.254863629 +0000 UTC m=+0.105078590 container attach f1286b6086be43a66fce46fdea939d0b842e28a8e017763af56b8d14bc557d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 23 10:31:58 compute-0 flamboyant_driscoll[281712]: 167 167
Jan 23 10:31:58 compute-0 systemd[1]: libpod-f1286b6086be43a66fce46fdea939d0b842e28a8e017763af56b8d14bc557d40.scope: Deactivated successfully.
Jan 23 10:31:58 compute-0 podman[281696]: 2026-01-23 10:31:58.259186612 +0000 UTC m=+0.109401573 container died f1286b6086be43a66fce46fdea939d0b842e28a8e017763af56b8d14bc557d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 10:31:58 compute-0 podman[281696]: 2026-01-23 10:31:58.170740808 +0000 UTC m=+0.020955789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1814690fb9c29aacaef1890ed2c79ea67c21336d19ddc01d2ac5ed55b5568f1c-merged.mount: Deactivated successfully.
Jan 23 10:31:58 compute-0 podman[281696]: 2026-01-23 10:31:58.295844508 +0000 UTC m=+0.146059469 container remove f1286b6086be43a66fce46fdea939d0b842e28a8e017763af56b8d14bc557d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 10:31:58 compute-0 systemd[1]: libpod-conmon-f1286b6086be43a66fce46fdea939d0b842e28a8e017763af56b8d14bc557d40.scope: Deactivated successfully.
Jan 23 10:31:58 compute-0 ceph-mon[74335]: pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 801 B/s rd, 0 op/s
Jan 23 10:31:58 compute-0 podman[281735]: 2026-01-23 10:31:58.455011421 +0000 UTC m=+0.042991028 container create 8f1e5345c49dd066b3b4ef5704b4e471bbb4096594e602745ea6da0970ae5eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_taussig, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:31:58 compute-0 systemd[1]: Started libpod-conmon-8f1e5345c49dd066b3b4ef5704b4e471bbb4096594e602745ea6da0970ae5eb3.scope.
Jan 23 10:31:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e83374d0715dbbee9ebf5b08822d9bb8c1f1090746e578bd1f1309cb512624/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e83374d0715dbbee9ebf5b08822d9bb8c1f1090746e578bd1f1309cb512624/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e83374d0715dbbee9ebf5b08822d9bb8c1f1090746e578bd1f1309cb512624/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e83374d0715dbbee9ebf5b08822d9bb8c1f1090746e578bd1f1309cb512624/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:31:58 compute-0 podman[281735]: 2026-01-23 10:31:58.434887726 +0000 UTC m=+0.022867343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:31:58 compute-0 podman[281735]: 2026-01-23 10:31:58.531523574 +0000 UTC m=+0.119503201 container init 8f1e5345c49dd066b3b4ef5704b4e471bbb4096594e602745ea6da0970ae5eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:31:58 compute-0 podman[281735]: 2026-01-23 10:31:58.538513404 +0000 UTC m=+0.126493011 container start 8f1e5345c49dd066b3b4ef5704b4e471bbb4096594e602745ea6da0970ae5eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_taussig, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 23 10:31:58 compute-0 podman[281735]: 2026-01-23 10:31:58.542483357 +0000 UTC m=+0.130462964 container attach 8f1e5345c49dd066b3b4ef5704b4e471bbb4096594e602745ea6da0970ae5eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 10:31:58 compute-0 nova_compute[249229]: 2026-01-23 10:31:58.560 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:31:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 534 B/s rd, 0 op/s
Jan 23 10:31:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:58.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:31:58.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:31:59 compute-0 lvm[281826]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:31:59 compute-0 lvm[281826]: VG ceph_vg0 finished
Jan 23 10:31:59 compute-0 busy_taussig[281751]: {}
Jan 23 10:31:59 compute-0 systemd[1]: libpod-8f1e5345c49dd066b3b4ef5704b4e471bbb4096594e602745ea6da0970ae5eb3.scope: Deactivated successfully.
Jan 23 10:31:59 compute-0 systemd[1]: libpod-8f1e5345c49dd066b3b4ef5704b4e471bbb4096594e602745ea6da0970ae5eb3.scope: Consumed 1.130s CPU time.
Jan 23 10:31:59 compute-0 podman[281830]: 2026-01-23 10:31:59.308158748 +0000 UTC m=+0.025946001 container died 8f1e5345c49dd066b3b4ef5704b4e471bbb4096594e602745ea6da0970ae5eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 10:31:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-47e83374d0715dbbee9ebf5b08822d9bb8c1f1090746e578bd1f1309cb512624-merged.mount: Deactivated successfully.
Jan 23 10:31:59 compute-0 ceph-mon[74335]: pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 534 B/s rd, 0 op/s
Jan 23 10:31:59 compute-0 podman[281830]: 2026-01-23 10:31:59.350865667 +0000 UTC m=+0.068652920 container remove 8f1e5345c49dd066b3b4ef5704b4e471bbb4096594e602745ea6da0970ae5eb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:31:59 compute-0 systemd[1]: libpod-conmon-8f1e5345c49dd066b3b4ef5704b4e471bbb4096594e602745ea6da0970ae5eb3.scope: Deactivated successfully.
Jan 23 10:31:59 compute-0 sudo[281628]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:31:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:31:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:31:59 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:31:59 compute-0 sudo[281844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:31:59 compute-0 sudo[281844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:31:59 compute-0 sudo[281844]: pam_unix(sudo:session): session closed for user root
Jan 23 10:31:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:31:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:31:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:59.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:31:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:31:59.788 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:31:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:31:59.789 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:31:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:31:59.790 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:31:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:59] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:31:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:31:59] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:32:00 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:32:00 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:32:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 534 B/s rd, 0 op/s
Jan 23 10:32:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:00.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:01 compute-0 nova_compute[249229]: 2026-01-23 10:32:01.281 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:01 compute-0 ceph-mon[74335]: pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 534 B/s rd, 0 op/s
Jan 23 10:32:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:01.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.337918) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164322338278, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2618, "num_deletes": 506, "total_data_size": 4266994, "memory_usage": 4346872, "flush_reason": "Manual Compaction"}
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164322363620, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2686587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31856, "largest_seqno": 34473, "table_properties": {"data_size": 2676761, "index_size": 5040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3717, "raw_key_size": 31915, "raw_average_key_size": 21, "raw_value_size": 2652031, "raw_average_value_size": 1807, "num_data_blocks": 215, "num_entries": 1467, "num_filter_entries": 1467, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164133, "oldest_key_time": 1769164133, "file_creation_time": 1769164322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 25686 microseconds, and 17023 cpu microseconds.
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.363706) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2686587 bytes OK
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.363745) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.365885) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.365915) EVENT_LOG_v1 {"time_micros": 1769164322365909, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.365936) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4254210, prev total WAL file size 4254210, number of live WAL files 2.
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.367761) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323536' seq:0, type:0; will stop at (end)
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2623KB)], [68(13MB)]
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164322367953, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 17293679, "oldest_snapshot_seqno": -1}
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6856 keys, 14416357 bytes, temperature: kUnknown
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164322465056, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 14416357, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14372071, "index_size": 26062, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 176923, "raw_average_key_size": 25, "raw_value_size": 14250332, "raw_average_value_size": 2078, "num_data_blocks": 1044, "num_entries": 6856, "num_filter_entries": 6856, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769164322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.465402) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 14416357 bytes
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.466700) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.0 rd, 148.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 13.9 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 7800, records dropped: 944 output_compression: NoCompression
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.466730) EVENT_LOG_v1 {"time_micros": 1769164322466712, "job": 38, "event": "compaction_finished", "compaction_time_micros": 97181, "compaction_time_cpu_micros": 56691, "output_level": 6, "num_output_files": 1, "total_output_size": 14416357, "num_input_records": 7800, "num_output_records": 6856, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164322467517, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164322470100, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.367537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.470242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.470249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.470252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.470255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:02 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:02.470258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 534 B/s rd, 0 op/s
Jan 23 10:32:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:02.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:03 compute-0 ceph-mon[74335]: pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 534 B/s rd, 0 op/s
Jan 23 10:32:03 compute-0 nova_compute[249229]: 2026-01-23 10:32:03.564 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:03.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:03.743Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:32:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:03.743Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:32:04 compute-0 podman[281874]: 2026-01-23 10:32:04.565138908 +0000 UTC m=+0.091322977 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 10:32:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 534 B/s rd, 0 op/s
Jan 23 10:32:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:04.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:32:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:32:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:05.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:05 compute-0 ceph-mon[74335]: pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 534 B/s rd, 0 op/s
Jan 23 10:32:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:32:06 compute-0 nova_compute[249229]: 2026-01-23 10:32:06.301 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:32:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:06.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:32:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:07.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:07 compute-0 ceph-mon[74335]: pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:07.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:07 compute-0 sudo[281904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:32:07 compute-0 sudo[281904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:32:07 compute-0 sudo[281904]: pam_unix(sudo:session): session closed for user root
Jan 23 10:32:08 compute-0 nova_compute[249229]: 2026-01-23 10:32:08.615 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:08 compute-0 nova_compute[249229]: 2026-01-23 10:32:08.728 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:32:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:08.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:08.943Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:32:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:08.943Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:32:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:09.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:32:09 compute-0 ceph-mon[74335]: pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:09] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:32:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:09] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:32:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:10.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:11 compute-0 nova_compute[249229]: 2026-01-23 10:32:11.304 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:11.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:11 compute-0 ceph-mon[74335]: pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:12.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:13 compute-0 podman[281934]: 2026-01-23 10:32:13.527490766 +0000 UTC m=+0.052939041 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 10:32:13 compute-0 nova_compute[249229]: 2026-01-23 10:32:13.617 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:13 compute-0 nova_compute[249229]: 2026-01-23 10:32:13.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:32:13 compute-0 nova_compute[249229]: 2026-01-23 10:32:13.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:32:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:13.745Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:13.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:13 compute-0 ceph-mon[74335]: pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:14 compute-0 nova_compute[249229]: 2026-01-23 10:32:14.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:32:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:32:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:14.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:32:15 compute-0 nova_compute[249229]: 2026-01-23 10:32:15.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:32:15 compute-0 nova_compute[249229]: 2026-01-23 10:32:15.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:32:15 compute-0 nova_compute[249229]: 2026-01-23 10:32:15.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:32:15 compute-0 nova_compute[249229]: 2026-01-23 10:32:15.743 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:32:15 compute-0 nova_compute[249229]: 2026-01-23 10:32:15.744 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:32:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:15.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:15 compute-0 nova_compute[249229]: 2026-01-23 10:32:15.769 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:32:15 compute-0 nova_compute[249229]: 2026-01-23 10:32:15.770 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:32:15 compute-0 nova_compute[249229]: 2026-01-23 10:32:15.770 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:32:15 compute-0 nova_compute[249229]: 2026-01-23 10:32:15.770 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:32:15 compute-0 nova_compute[249229]: 2026-01-23 10:32:15.771 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:32:15 compute-0 ceph-mon[74335]: pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:16 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:32:16 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115399558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.258 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.307 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.435 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.437 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4484MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.437 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.438 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:32:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.706 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.707 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:32:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:32:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:16.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.827 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing inventories for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 10:32:16 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3115399558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.874 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating ProviderTree inventory for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.875 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating inventory in ProviderTree for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.933 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing aggregate associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 10:32:16 compute-0 nova_compute[249229]: 2026-01-23 10:32:16.988 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing trait associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 10:32:17 compute-0 nova_compute[249229]: 2026-01-23 10:32:17.029 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:32:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:32:17 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359406760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:32:17 compute-0 nova_compute[249229]: 2026-01-23 10:32:17.564 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:32:17 compute-0 nova_compute[249229]: 2026-01-23 10:32:17.570 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:32:17 compute-0 nova_compute[249229]: 2026-01-23 10:32:17.615 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:32:17 compute-0 nova_compute[249229]: 2026-01-23 10:32:17.617 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:32:17 compute-0 nova_compute[249229]: 2026-01-23 10:32:17.617 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:32:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:17.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:17.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:32:17 compute-0 ceph-mon[74335]: pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3707941147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:32:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1685129570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:32:17 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3359406760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.884866) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164337884908, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 382, "num_deletes": 251, "total_data_size": 303804, "memory_usage": 310888, "flush_reason": "Manual Compaction"}
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164337888568, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 298037, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34474, "largest_seqno": 34855, "table_properties": {"data_size": 295738, "index_size": 463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5740, "raw_average_key_size": 18, "raw_value_size": 291160, "raw_average_value_size": 948, "num_data_blocks": 20, "num_entries": 307, "num_filter_entries": 307, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164323, "oldest_key_time": 1769164323, "file_creation_time": 1769164337, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 3738 microseconds, and 1400 cpu microseconds.
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.888607) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 298037 bytes OK
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.888623) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.889695) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.889707) EVENT_LOG_v1 {"time_micros": 1769164337889703, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.889724) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 301347, prev total WAL file size 301347, number of live WAL files 2.
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.890114) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(291KB)], [71(13MB)]
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164337890138, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 14714394, "oldest_snapshot_seqno": -1}
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6653 keys, 12553709 bytes, temperature: kUnknown
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164337949260, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 12553709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12512199, "index_size": 23798, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 173409, "raw_average_key_size": 26, "raw_value_size": 12395409, "raw_average_value_size": 1863, "num_data_blocks": 943, "num_entries": 6653, "num_filter_entries": 6653, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769164337, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.949718) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 12553709 bytes
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.951033) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 248.0 rd, 211.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.7 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(91.5) write-amplify(42.1) OK, records in: 7163, records dropped: 510 output_compression: NoCompression
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.951058) EVENT_LOG_v1 {"time_micros": 1769164337951046, "job": 40, "event": "compaction_finished", "compaction_time_micros": 59328, "compaction_time_cpu_micros": 27391, "output_level": 6, "num_output_files": 1, "total_output_size": 12553709, "num_input_records": 7163, "num_output_records": 6653, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164337951334, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164337954498, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.890031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.954584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.954589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.954591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.954593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:17 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:32:17.954594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:32:18 compute-0 nova_compute[249229]: 2026-01-23 10:32:18.620 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:32:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:18.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:32:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:18.945Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4119112767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:32:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/204262838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:32:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:19.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:19] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:32:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:19] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:32:20 compute-0 ceph-mon[74335]: pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:32:20
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta', 'volumes', 'vms', '.nfs', 'images']
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:32:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:32:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:32:20 compute-0 nova_compute[249229]: 2026-01-23 10:32:20.590 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:32:20 compute-0 nova_compute[249229]: 2026-01-23 10:32:20.591 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:32:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:20.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:32:21 compute-0 nova_compute[249229]: 2026-01-23 10:32:21.310 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:21 compute-0 nova_compute[249229]: 2026-01-23 10:32:21.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:32:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:21.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:22 compute-0 ceph-mon[74335]: pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:22.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:23 compute-0 nova_compute[249229]: 2026-01-23 10:32:23.623 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:23.745Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:23.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:24 compute-0 ceph-mon[74335]: pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:24 compute-0 nova_compute[249229]: 2026-01-23 10:32:24.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:32:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:32:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:24.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:32:25 compute-0 ceph-mon[74335]: pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:25.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:26 compute-0 nova_compute[249229]: 2026-01-23 10:32:26.312 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:32:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:26.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:32:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:27.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:27 compute-0 ceph-mon[74335]: pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:27.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:28 compute-0 sudo[282012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:32:28 compute-0 sudo[282012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:32:28 compute-0 sudo[282012]: pam_unix(sudo:session): session closed for user root
Jan 23 10:32:28 compute-0 nova_compute[249229]: 2026-01-23 10:32:28.624 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:28.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:28.946Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:32:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:29.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:32:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:29] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:32:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:29] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:32:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:30.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:31 compute-0 ceph-mon[74335]: pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:31 compute-0 nova_compute[249229]: 2026-01-23 10:32:31.366 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:32:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:31.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:32:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:32 compute-0 ceph-mon[74335]: pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:32:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:32.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:32:33 compute-0 nova_compute[249229]: 2026-01-23 10:32:33.626 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:33.747Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:33.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:34 compute-0 ceph-mon[74335]: pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:32:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:34.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:32:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:32:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:32:35 compute-0 podman[282044]: 2026-01-23 10:32:35.537533301 +0000 UTC m=+0.072936083 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 10:32:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:35.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:36 compute-0 ceph-mon[74335]: pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:32:36 compute-0 nova_compute[249229]: 2026-01-23 10:32:36.368 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Jan 23 10:32:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:32:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:36.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:32:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:37 compute-0 ceph-mon[74335]: pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Jan 23 10:32:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:32:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:37.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:32:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:37.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:38 compute-0 nova_compute[249229]: 2026-01-23 10:32:38.628 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 23 10:32:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:32:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:38.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:32:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:38.947Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:39.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:39] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:32:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:39] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:32:40 compute-0 ceph-mon[74335]: pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 23 10:32:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 23 10:32:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:40.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:41 compute-0 nova_compute[249229]: 2026-01-23 10:32:41.412 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:41.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Jan 23 10:32:42 compute-0 ceph-mon[74335]: pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 23 10:32:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:42.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:43 compute-0 nova_compute[249229]: 2026-01-23 10:32:43.668 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:43.748Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:43.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:44 compute-0 ceph-mon[74335]: pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Jan 23 10:32:44 compute-0 podman[282077]: 2026-01-23 10:32:44.515876088 +0000 UTC m=+0.049883655 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 10:32:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Jan 23 10:32:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:44.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:45.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:45 compute-0 ceph-mon[74335]: pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Jan 23 10:32:46 compute-0 nova_compute[249229]: 2026-01-23 10:32:46.415 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 0 B/s wr, 147 op/s
Jan 23 10:32:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:46.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:47.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:47.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:48 compute-0 sudo[282098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:32:48 compute-0 sudo[282098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:32:48 compute-0 ceph-mon[74335]: pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 0 B/s wr, 147 op/s
Jan 23 10:32:48 compute-0 sudo[282098]: pam_unix(sudo:session): session closed for user root
Jan 23 10:32:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Jan 23 10:32:48 compute-0 nova_compute[249229]: 2026-01-23 10:32:48.671 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:48.948Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:49 compute-0 ceph-mon[74335]: pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Jan 23 10:32:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:49.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:49] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:32:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:49] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:32:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:32:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:32:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:32:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:32:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:32:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:32:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:32:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:32:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:32:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Jan 23 10:32:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:50.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:51 compute-0 nova_compute[249229]: 2026-01-23 10:32:51.418 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:51 compute-0 ceph-mon[74335]: pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Jan 23 10:32:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:51.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 0 B/s wr, 155 op/s
Jan 23 10:32:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:52.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:53 compute-0 nova_compute[249229]: 2026-01-23 10:32:53.673 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:53.748Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:53.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 23 10:32:54 compute-0 ceph-mon[74335]: pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 0 B/s wr, 155 op/s
Jan 23 10:32:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:32:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:54.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:32:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:55.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:56 compute-0 ceph-mon[74335]: pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 23 10:32:56 compute-0 nova_compute[249229]: 2026-01-23 10:32:56.421 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 0 B/s wr, 105 op/s
Jan 23 10:32:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:32:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:56.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:32:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:32:57 compute-0 ceph-mon[74335]: pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 0 B/s wr, 105 op/s
Jan 23 10:32:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:57.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:57.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 23 10:32:58 compute-0 nova_compute[249229]: 2026-01-23 10:32:58.675 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:32:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:58.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:32:58.949Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:32:59 compute-0 sudo[282134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:32:59 compute-0 sudo[282134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:32:59 compute-0 sudo[282134]: pam_unix(sudo:session): session closed for user root
Jan 23 10:32:59 compute-0 sudo[282159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:32:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:32:59.788 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:32:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:32:59.788 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:32:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:32:59.789 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:32:59 compute-0 sudo[282159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:32:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:32:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:32:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:59.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:32:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:59] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Jan 23 10:32:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:32:59] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Jan 23 10:33:00 compute-0 ceph-mon[74335]: pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 23 10:33:00 compute-0 sudo[282159]: pam_unix(sudo:session): session closed for user root
Jan 23 10:33:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:33:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:33:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:33:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:33:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 23 10:33:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:33:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:33:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:33:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:33:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:33:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:33:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:33:00 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:33:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:33:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:33:00 compute-0 sudo[282219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:33:00 compute-0 sudo[282219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:33:00 compute-0 sudo[282219]: pam_unix(sudo:session): session closed for user root
Jan 23 10:33:00 compute-0 sudo[282244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:33:00 compute-0 sudo[282244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:33:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:33:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:00.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:33:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:33:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:33:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:33:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:33:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:33:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:33:01 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:33:01 compute-0 podman[282312]: 2026-01-23 10:33:01.142045664 +0000 UTC m=+0.045413040 container create 995a0cafa0c955a5fc9cfc37e3024f7804847a057458b786a93943dfc22c35b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:33:01 compute-0 systemd[1]: Started libpod-conmon-995a0cafa0c955a5fc9cfc37e3024f7804847a057458b786a93943dfc22c35b2.scope.
Jan 23 10:33:01 compute-0 podman[282312]: 2026-01-23 10:33:01.119385311 +0000 UTC m=+0.022752707 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:33:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:33:01 compute-0 podman[282312]: 2026-01-23 10:33:01.231303108 +0000 UTC m=+0.134670504 container init 995a0cafa0c955a5fc9cfc37e3024f7804847a057458b786a93943dfc22c35b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shirley, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:33:01 compute-0 podman[282312]: 2026-01-23 10:33:01.24159662 +0000 UTC m=+0.144963996 container start 995a0cafa0c955a5fc9cfc37e3024f7804847a057458b786a93943dfc22c35b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shirley, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 10:33:01 compute-0 podman[282312]: 2026-01-23 10:33:01.245664816 +0000 UTC m=+0.149032212 container attach 995a0cafa0c955a5fc9cfc37e3024f7804847a057458b786a93943dfc22c35b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 23 10:33:01 compute-0 intelligent_shirley[282329]: 167 167
Jan 23 10:33:01 compute-0 systemd[1]: libpod-995a0cafa0c955a5fc9cfc37e3024f7804847a057458b786a93943dfc22c35b2.scope: Deactivated successfully.
Jan 23 10:33:01 compute-0 podman[282312]: 2026-01-23 10:33:01.249141385 +0000 UTC m=+0.152508791 container died 995a0cafa0c955a5fc9cfc37e3024f7804847a057458b786a93943dfc22c35b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shirley, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b6fc41ca649ca97b55ceb1322bb96f7d392dbb4d41057b2f1ccde9e38d4d1e2-merged.mount: Deactivated successfully.
Jan 23 10:33:01 compute-0 podman[282312]: 2026-01-23 10:33:01.288824341 +0000 UTC m=+0.192191717 container remove 995a0cafa0c955a5fc9cfc37e3024f7804847a057458b786a93943dfc22c35b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shirley, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:33:01 compute-0 systemd[1]: libpod-conmon-995a0cafa0c955a5fc9cfc37e3024f7804847a057458b786a93943dfc22c35b2.scope: Deactivated successfully.
Jan 23 10:33:01 compute-0 nova_compute[249229]: 2026-01-23 10:33:01.424 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:01 compute-0 podman[282354]: 2026-01-23 10:33:01.45680229 +0000 UTC m=+0.051002449 container create d0bf8007a8d5f62d4fcc359e9607a94ae78f7e2758d9f652343bea8fbc6bbdab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Jan 23 10:33:01 compute-0 systemd[1]: Started libpod-conmon-d0bf8007a8d5f62d4fcc359e9607a94ae78f7e2758d9f652343bea8fbc6bbdab.scope.
Jan 23 10:33:01 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:33:01 compute-0 podman[282354]: 2026-01-23 10:33:01.437581565 +0000 UTC m=+0.031781744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d8a0f7a8b0b3678f7d6e14aaadcd12b063465851cecdced8a1d36c23101bbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d8a0f7a8b0b3678f7d6e14aaadcd12b063465851cecdced8a1d36c23101bbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d8a0f7a8b0b3678f7d6e14aaadcd12b063465851cecdced8a1d36c23101bbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d8a0f7a8b0b3678f7d6e14aaadcd12b063465851cecdced8a1d36c23101bbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d8a0f7a8b0b3678f7d6e14aaadcd12b063465851cecdced8a1d36c23101bbe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:01 compute-0 podman[282354]: 2026-01-23 10:33:01.545061916 +0000 UTC m=+0.139262095 container init d0bf8007a8d5f62d4fcc359e9607a94ae78f7e2758d9f652343bea8fbc6bbdab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:33:01 compute-0 podman[282354]: 2026-01-23 10:33:01.552414265 +0000 UTC m=+0.146614424 container start d0bf8007a8d5f62d4fcc359e9607a94ae78f7e2758d9f652343bea8fbc6bbdab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lovelace, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:33:01 compute-0 podman[282354]: 2026-01-23 10:33:01.556991665 +0000 UTC m=+0.151191844 container attach d0bf8007a8d5f62d4fcc359e9607a94ae78f7e2758d9f652343bea8fbc6bbdab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lovelace, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:33:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:01.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:01 compute-0 heuristic_lovelace[282370]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:33:01 compute-0 heuristic_lovelace[282370]: --> All data devices are unavailable
Jan 23 10:33:01 compute-0 systemd[1]: libpod-d0bf8007a8d5f62d4fcc359e9607a94ae78f7e2758d9f652343bea8fbc6bbdab.scope: Deactivated successfully.
Jan 23 10:33:01 compute-0 podman[282386]: 2026-01-23 10:33:01.935599954 +0000 UTC m=+0.022852400 container died d0bf8007a8d5f62d4fcc359e9607a94ae78f7e2758d9f652343bea8fbc6bbdab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lovelace, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-57d8a0f7a8b0b3678f7d6e14aaadcd12b063465851cecdced8a1d36c23101bbe-merged.mount: Deactivated successfully.
Jan 23 10:33:01 compute-0 podman[282386]: 2026-01-23 10:33:01.976509405 +0000 UTC m=+0.063761821 container remove d0bf8007a8d5f62d4fcc359e9607a94ae78f7e2758d9f652343bea8fbc6bbdab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:33:01 compute-0 systemd[1]: libpod-conmon-d0bf8007a8d5f62d4fcc359e9607a94ae78f7e2758d9f652343bea8fbc6bbdab.scope: Deactivated successfully.
Jan 23 10:33:02 compute-0 sudo[282244]: pam_unix(sudo:session): session closed for user root
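The ceph-volume "lvm batch" run launched at 10:33:00 (containers intelligent_shirley and heuristic_lovelace above) ends with "passed data devices: 0 physical, 1 LVM" and "All data devices are unavailable", after which cephadm re-inventories the host with "lvm list --format json" (the frosty_bell output further below). That output shows the only candidate, /dev/ceph_vg0/ceph_lv0, already tagged as OSD 1, which would explain why the batch found nothing to deploy. Below is a minimal sketch, not cephadm code, of reading that JSON to list already-claimed LVs; the field names (lv_path, tags, ceph.osd_fsid) are taken from the frosty_bell output.

import json

def claimed_osds(lvm_list_json: str):
    """Yield (osd_id, lv_path, osd_fsid) for every LV already tagged as an OSD."""
    data = json.loads(lvm_list_json)
    for osd_id, lvs in data.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            yield osd_id, lv.get("lv_path"), tags.get("ceph.osd_fsid")

# Abbreviated example using the single entry visible in the output below.
sample = json.dumps({
    "1": [{
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "tags": {"ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5"},
    }]
})

for osd_id, lv_path, osd_fsid in claimed_osds(sample):
    print(f"osd.{osd_id} already lives on {lv_path} (fsid {osd_fsid})")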
Jan 23 10:33:02 compute-0 sudo[282401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:33:02 compute-0 sudo[282401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:33:02 compute-0 sudo[282401]: pam_unix(sudo:session): session closed for user root
Jan 23 10:33:02 compute-0 ceph-mon[74335]: pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 23 10:33:02 compute-0 sudo[282426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:33:02 compute-0 sudo[282426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:33:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Jan 23 10:33:02 compute-0 podman[282492]: 2026-01-23 10:33:02.50360081 +0000 UTC m=+0.037835475 container create 30892dbe847c53aa91b9ac3eae1e53ec989e7936fd18250d72350adf885b9e33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_villani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Jan 23 10:33:02 compute-0 systemd[1]: Started libpod-conmon-30892dbe847c53aa91b9ac3eae1e53ec989e7936fd18250d72350adf885b9e33.scope.
Jan 23 10:33:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:33:02 compute-0 podman[282492]: 2026-01-23 10:33:02.567520745 +0000 UTC m=+0.101755430 container init 30892dbe847c53aa91b9ac3eae1e53ec989e7936fd18250d72350adf885b9e33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_villani, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 23 10:33:02 compute-0 podman[282492]: 2026-01-23 10:33:02.574293107 +0000 UTC m=+0.108527772 container start 30892dbe847c53aa91b9ac3eae1e53ec989e7936fd18250d72350adf885b9e33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:33:02 compute-0 podman[282492]: 2026-01-23 10:33:02.578299131 +0000 UTC m=+0.112533846 container attach 30892dbe847c53aa91b9ac3eae1e53ec989e7936fd18250d72350adf885b9e33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 23 10:33:02 compute-0 vigorous_villani[282508]: 167 167
Jan 23 10:33:02 compute-0 systemd[1]: libpod-30892dbe847c53aa91b9ac3eae1e53ec989e7936fd18250d72350adf885b9e33.scope: Deactivated successfully.
Jan 23 10:33:02 compute-0 podman[282492]: 2026-01-23 10:33:02.580539434 +0000 UTC m=+0.114774099 container died 30892dbe847c53aa91b9ac3eae1e53ec989e7936fd18250d72350adf885b9e33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 10:33:02 compute-0 podman[282492]: 2026-01-23 10:33:02.487634667 +0000 UTC m=+0.021869352 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-93b03a0b4a91f97e9b182c37752f417e435fd3bea6a71444314fcdf67e7195a9-merged.mount: Deactivated successfully.
Jan 23 10:33:02 compute-0 podman[282492]: 2026-01-23 10:33:02.614105797 +0000 UTC m=+0.148340462 container remove 30892dbe847c53aa91b9ac3eae1e53ec989e7936fd18250d72350adf885b9e33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_villani, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 10:33:02 compute-0 systemd[1]: libpod-conmon-30892dbe847c53aa91b9ac3eae1e53ec989e7936fd18250d72350adf885b9e33.scope: Deactivated successfully.
Jan 23 10:33:02 compute-0 podman[282535]: 2026-01-23 10:33:02.772957327 +0000 UTC m=+0.037163926 container create 42f6f35facb52fa2f0b9eceeae1485a29bd5f1f8a8351c48c2ff0ad3ca47ac38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bell, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 23 10:33:02 compute-0 systemd[1]: Started libpod-conmon-42f6f35facb52fa2f0b9eceeae1485a29bd5f1f8a8351c48c2ff0ad3ca47ac38.scope.
Jan 23 10:33:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:02.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a27158cc62a74c10bf7832f93fd07ec3e36d3a2251a36473c5df273e309eb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a27158cc62a74c10bf7832f93fd07ec3e36d3a2251a36473c5df273e309eb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a27158cc62a74c10bf7832f93fd07ec3e36d3a2251a36473c5df273e309eb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a27158cc62a74c10bf7832f93fd07ec3e36d3a2251a36473c5df273e309eb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:02 compute-0 podman[282535]: 2026-01-23 10:33:02.757332824 +0000 UTC m=+0.021539453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:33:02 compute-0 podman[282535]: 2026-01-23 10:33:02.856805148 +0000 UTC m=+0.121011757 container init 42f6f35facb52fa2f0b9eceeae1485a29bd5f1f8a8351c48c2ff0ad3ca47ac38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bell, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:33:02 compute-0 podman[282535]: 2026-01-23 10:33:02.863156258 +0000 UTC m=+0.127362867 container start 42f6f35facb52fa2f0b9eceeae1485a29bd5f1f8a8351c48c2ff0ad3ca47ac38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:33:02 compute-0 podman[282535]: 2026-01-23 10:33:02.866871173 +0000 UTC m=+0.131077852 container attach 42f6f35facb52fa2f0b9eceeae1485a29bd5f1f8a8351c48c2ff0ad3ca47ac38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:33:03 compute-0 frosty_bell[282551]: {
Jan 23 10:33:03 compute-0 frosty_bell[282551]:     "1": [
Jan 23 10:33:03 compute-0 frosty_bell[282551]:         {
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             "devices": [
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "/dev/loop3"
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             ],
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             "lv_name": "ceph_lv0",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             "lv_size": "21470642176",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             "name": "ceph_lv0",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             "tags": {
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.cluster_name": "ceph",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.crush_device_class": "",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.encrypted": "0",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.osd_id": "1",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.type": "block",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.vdo": "0",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:                 "ceph.with_tpm": "0"
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             },
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             "type": "block",
Jan 23 10:33:03 compute-0 frosty_bell[282551]:             "vg_name": "ceph_vg0"
Jan 23 10:33:03 compute-0 frosty_bell[282551]:         }
Jan 23 10:33:03 compute-0 frosty_bell[282551]:     ]
Jan 23 10:33:03 compute-0 frosty_bell[282551]: }
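
The JSON block printed by frosty_bell is a per-OSD LV inventory consistent with "ceph-volume lvm list --format json": OSD 1 is backed by LV ceph_lv0 in VG ceph_vg0 on /dev/loop3, with the cluster fsid, OSD fsid and drive-group affinity carried as LV tags. A minimal post-processing sketch, not part of the captured journal, assuming the payload has been saved to a file after stripping the journald prefixes (file name hypothetical):

    import json
    import sys

    # Sketch (not part of the captured journal): map OSD ids to their backing
    # LVs from a saved "ceph-volume lvm list --format json" payload.
    def osd_block_devices(payload):
        """Return {osd_id: {"lv_path": ..., "devices": [...], "osd_fsid": ...}}."""
        result = {}
        for osd_id, lvs in payload.items():
            for lv in lvs:
                tags = lv.get("tags", {})
                if tags.get("ceph.type") == "block":
                    result[osd_id] = {
                        "lv_path": lv.get("lv_path"),
                        "devices": lv.get("devices", []),
                        "osd_fsid": tags.get("ceph.osd_fsid"),
                    }
        return result

    if __name__ == "__main__":
        with open(sys.argv[1]) as fh:      # e.g. lvm_list.json (hypothetical)
            print(json.dumps(osd_block_devices(json.load(fh)), indent=2))
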
Jan 23 10:33:03 compute-0 systemd[1]: libpod-42f6f35facb52fa2f0b9eceeae1485a29bd5f1f8a8351c48c2ff0ad3ca47ac38.scope: Deactivated successfully.
Jan 23 10:33:03 compute-0 podman[282535]: 2026-01-23 10:33:03.139853364 +0000 UTC m=+0.404059983 container died 42f6f35facb52fa2f0b9eceeae1485a29bd5f1f8a8351c48c2ff0ad3ca47ac38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 23 10:33:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7a27158cc62a74c10bf7832f93fd07ec3e36d3a2251a36473c5df273e309eb4-merged.mount: Deactivated successfully.
Jan 23 10:33:03 compute-0 podman[282535]: 2026-01-23 10:33:03.180643832 +0000 UTC m=+0.444850441 container remove 42f6f35facb52fa2f0b9eceeae1485a29bd5f1f8a8351c48c2ff0ad3ca47ac38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:33:03 compute-0 systemd[1]: libpod-conmon-42f6f35facb52fa2f0b9eceeae1485a29bd5f1f8a8351c48c2ff0ad3ca47ac38.scope: Deactivated successfully.
Jan 23 10:33:03 compute-0 sudo[282426]: pam_unix(sudo:session): session closed for user root
Jan 23 10:33:03 compute-0 sudo[282571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:33:03 compute-0 sudo[282571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:33:03 compute-0 sudo[282571]: pam_unix(sudo:session): session closed for user root
Jan 23 10:33:03 compute-0 sudo[282596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:33:03 compute-0 sudo[282596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:33:03 compute-0 nova_compute[249229]: 2026-01-23 10:33:03.676 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:03 compute-0 podman[282663]: 2026-01-23 10:33:03.721417054 +0000 UTC m=+0.049851387 container create 179fc95d823a084145fd53fe43e62d919a44e077085f15aa10bd28545c6427e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 10:33:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:03.749Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:03 compute-0 systemd[1]: Started libpod-conmon-179fc95d823a084145fd53fe43e62d919a44e077085f15aa10bd28545c6427e5.scope.
Jan 23 10:33:03 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:33:03 compute-0 podman[282663]: 2026-01-23 10:33:03.699329387 +0000 UTC m=+0.027763740 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:33:03 compute-0 podman[282663]: 2026-01-23 10:33:03.796807044 +0000 UTC m=+0.125241397 container init 179fc95d823a084145fd53fe43e62d919a44e077085f15aa10bd28545c6427e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_dubinsky, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:33:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:03.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:03 compute-0 podman[282663]: 2026-01-23 10:33:03.803916836 +0000 UTC m=+0.132351169 container start 179fc95d823a084145fd53fe43e62d919a44e077085f15aa10bd28545c6427e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:33:03 compute-0 podman[282663]: 2026-01-23 10:33:03.807018894 +0000 UTC m=+0.135453227 container attach 179fc95d823a084145fd53fe43e62d919a44e077085f15aa10bd28545c6427e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:33:03 compute-0 nifty_dubinsky[282679]: 167 167
Jan 23 10:33:03 compute-0 systemd[1]: libpod-179fc95d823a084145fd53fe43e62d919a44e077085f15aa10bd28545c6427e5.scope: Deactivated successfully.
Jan 23 10:33:03 compute-0 conmon[282679]: conmon 179fc95d823a084145fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-179fc95d823a084145fd53fe43e62d919a44e077085f15aa10bd28545c6427e5.scope/container/memory.events
Jan 23 10:33:03 compute-0 podman[282663]: 2026-01-23 10:33:03.809943367 +0000 UTC m=+0.138377700 container died 179fc95d823a084145fd53fe43e62d919a44e077085f15aa10bd28545c6427e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 23 10:33:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0768efb8ce11668b168c6c08b467ce52591dedbb8c50928c29f628644640b2a7-merged.mount: Deactivated successfully.
Jan 23 10:33:03 compute-0 podman[282663]: 2026-01-23 10:33:03.847702659 +0000 UTC m=+0.176136992 container remove 179fc95d823a084145fd53fe43e62d919a44e077085f15aa10bd28545c6427e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_dubinsky, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:33:03 compute-0 systemd[1]: libpod-conmon-179fc95d823a084145fd53fe43e62d919a44e077085f15aa10bd28545c6427e5.scope: Deactivated successfully.
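
Each of these cephadm ceph-volume calls runs as a short-lived podman container with an auto-generated name (frosty_bell, nifty_dubinsky, and sleepy_newton below), visible as a create, init, start, attach, died, remove sequence lasting well under a second. A sketch, not part of the captured journal, for grouping those lifecycle events per container ID from an exported journal (file path hypothetical):

    import re
    import sys
    from collections import defaultdict

    # Sketch (not part of the captured journal): group podman lifecycle events
    # by container ID so each one-shot helper container can be read as a
    # single create -> ... -> remove timeline.
    EVENT = re.compile(
        r'podman\[\d+\]: (?P<ts>\S+ \S+).* container '
        r'(?P<event>create|init|start|attach|died|remove) (?P<cid>[0-9a-f]{64})'
    )

    def lifecycles(path):
        events = defaultdict(list)
        with open(path) as fh:
            for line in fh:
                m = EVENT.search(line)
                if m:
                    events[m["cid"]].append((m["ts"], m["event"]))
        return events

    if __name__ == "__main__":
        for cid, evs in lifecycles(sys.argv[1]).items():
            print(cid[:12], " -> ".join(e for _, e in evs))
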
Jan 23 10:33:04 compute-0 podman[282703]: 2026-01-23 10:33:04.008156065 +0000 UTC m=+0.040566403 container create 5a1f7f82b992a42a567afa40c80be12e3399dbc03abb76a518456aabd295afe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:33:04 compute-0 systemd[1]: Started libpod-conmon-5a1f7f82b992a42a567afa40c80be12e3399dbc03abb76a518456aabd295afe0.scope.
Jan 23 10:33:04 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f9126e488fe703c3a905ef2e1ca9133a1152e9e69fa0bd7c431059e69bfc3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:04 compute-0 podman[282703]: 2026-01-23 10:33:03.991139731 +0000 UTC m=+0.023550089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f9126e488fe703c3a905ef2e1ca9133a1152e9e69fa0bd7c431059e69bfc3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f9126e488fe703c3a905ef2e1ca9133a1152e9e69fa0bd7c431059e69bfc3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f9126e488fe703c3a905ef2e1ca9133a1152e9e69fa0bd7c431059e69bfc3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:33:04 compute-0 podman[282703]: 2026-01-23 10:33:04.095472844 +0000 UTC m=+0.127883182 container init 5a1f7f82b992a42a567afa40c80be12e3399dbc03abb76a518456aabd295afe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:33:04 compute-0 podman[282703]: 2026-01-23 10:33:04.104032957 +0000 UTC m=+0.136443295 container start 5a1f7f82b992a42a567afa40c80be12e3399dbc03abb76a518456aabd295afe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 10:33:04 compute-0 podman[282703]: 2026-01-23 10:33:04.107720331 +0000 UTC m=+0.140130679 container attach 5a1f7f82b992a42a567afa40c80be12e3399dbc03abb76a518456aabd295afe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 10:33:04 compute-0 ceph-mon[74335]: pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Jan 23 10:33:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 522 B/s rd, 0 op/s
Jan 23 10:33:04 compute-0 lvm[282794]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:33:04 compute-0 lvm[282794]: VG ceph_vg0 finished
Jan 23 10:33:04 compute-0 sleepy_newton[282719]: {}
Jan 23 10:33:04 compute-0 systemd[1]: libpod-5a1f7f82b992a42a567afa40c80be12e3399dbc03abb76a518456aabd295afe0.scope: Deactivated successfully.
Jan 23 10:33:04 compute-0 podman[282703]: 2026-01-23 10:33:04.768228114 +0000 UTC m=+0.800638452 container died 5a1f7f82b992a42a567afa40c80be12e3399dbc03abb76a518456aabd295afe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:33:04 compute-0 systemd[1]: libpod-5a1f7f82b992a42a567afa40c80be12e3399dbc03abb76a518456aabd295afe0.scope: Consumed 1.066s CPU time.
Jan 23 10:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-42f9126e488fe703c3a905ef2e1ca9133a1152e9e69fa0bd7c431059e69bfc3a-merged.mount: Deactivated successfully.
Jan 23 10:33:04 compute-0 podman[282703]: 2026-01-23 10:33:04.810239636 +0000 UTC m=+0.842649974 container remove 5a1f7f82b992a42a567afa40c80be12e3399dbc03abb76a518456aabd295afe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:33:04 compute-0 systemd[1]: libpod-conmon-5a1f7f82b992a42a567afa40c80be12e3399dbc03abb76a518456aabd295afe0.scope: Deactivated successfully.
Jan 23 10:33:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:33:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:04.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:33:04 compute-0 sudo[282596]: pam_unix(sudo:session): session closed for user root
Jan 23 10:33:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:33:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:33:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:33:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:33:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:33:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:33:05 compute-0 sudo[282809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:33:05 compute-0 sudo[282809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:33:05 compute-0 sudo[282809]: pam_unix(sudo:session): session closed for user root
Jan 23 10:33:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:33:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:05.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:33:05 compute-0 ceph-mon[74335]: pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 522 B/s rd, 0 op/s
Jan 23 10:33:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:33:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:33:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:33:06 compute-0 nova_compute[249229]: 2026-01-23 10:33:06.427 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 783 B/s rd, 0 op/s
Jan 23 10:33:06 compute-0 podman[282835]: 2026-01-23 10:33:06.580375962 +0000 UTC m=+0.102151551 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 10:33:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:33:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:06.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:33:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.460882) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164387460988, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 684, "num_deletes": 251, "total_data_size": 1000076, "memory_usage": 1014296, "flush_reason": "Manual Compaction"}
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164387528976, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 980518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34856, "largest_seqno": 35539, "table_properties": {"data_size": 976921, "index_size": 1441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7221, "raw_average_key_size": 17, "raw_value_size": 969770, "raw_average_value_size": 2298, "num_data_blocks": 62, "num_entries": 422, "num_filter_entries": 422, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164338, "oldest_key_time": 1769164338, "file_creation_time": 1769164387, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 68152 microseconds, and 3379 cpu microseconds.
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.529037) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 980518 bytes OK
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.529059) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.578669) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.578727) EVENT_LOG_v1 {"time_micros": 1769164387578715, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.578754) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 996573, prev total WAL file size 996573, number of live WAL files 2.
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.579447) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353033' seq:0, type:0; will stop at (end)
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(957KB)], [74(11MB)]
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164387579558, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 13534227, "oldest_snapshot_seqno": -1}
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6559 keys, 12131367 bytes, temperature: kUnknown
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164387694628, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12131367, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12090472, "index_size": 23375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 173129, "raw_average_key_size": 26, "raw_value_size": 11975099, "raw_average_value_size": 1825, "num_data_blocks": 913, "num_entries": 6559, "num_filter_entries": 6559, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769164387, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.694881) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12131367 bytes
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.706415) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.5 rd, 105.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.0 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(26.2) write-amplify(12.4) OK, records in: 7075, records dropped: 516 output_compression: NoCompression
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.706451) EVENT_LOG_v1 {"time_micros": 1769164387706438, "job": 42, "event": "compaction_finished", "compaction_time_micros": 115159, "compaction_time_cpu_micros": 26170, "output_level": 6, "num_output_files": 1, "total_output_size": 12131367, "num_input_records": 7075, "num_output_records": 6559, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164387706882, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164387709982, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.579335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.710056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.710063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.710065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.710067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:33:07 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:33:07.710069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
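
The JOB 42 compaction summary above can be cross-checked against the logged byte counts. The arithmetic below is a sketch, not part of the captured journal; the ratio definitions are inferred from the figures themselves rather than quoted from RocksDB documentation, but they reproduce the reported values:

    # Sketch (not part of the captured journal): reproduce the JOB 42 figures.
    l0_in    = 980_518       # flushed L0 table #76 ("files_L0": [76])
    total_in = 13_534_227    # "input_data_size" from the compaction_started event
    l6_in    = total_in - l0_in        # pre-existing L6 table #74
    out      = 12_131_367    # output table #77 ("total_output_size")
    secs     = 115_159 / 1e6           # "compaction_time_micros"

    write_amp      = out / l0_in               # ~12.4, matches write-amplify(12.4)
    read_write_amp = (total_in + out) / l0_in  # ~26.2, matches read-write-amplify(26.2)
    rd_mb_s        = total_in / secs / 1e6     # ~117.5 MB/s rd
    wr_mb_s        = out / secs / 1e6          # ~105.3 MB/s wr

    print(f"write-amplify      ~ {write_amp:.1f}")
    print(f"read-write-amplify ~ {read_write_amp:.1f}")
    print(f"rd {rd_mb_s:.1f} MB/s, wr {wr_mb_s:.1f} MB/s")
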
Jan 23 10:33:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:07.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:07.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:33:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:07.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
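
Alertmanager repeatedly fails to deliver the ceph-dashboard webhook to compute-1 and compute-2 on port 8443 (dial tcp i/o timeout, context deadline exceeded). A minimal reachability probe, not part of the captured journal, that could be run from this node; the hosts and port are taken from the errors above, the 5-second timeout is an arbitrary choice:

    import socket

    # Sketch (not part of the captured journal): plain TCP reachability check
    # for the dashboard receiver endpoints Alertmanager keeps timing out on.
    TARGETS = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]

    for host, port in TARGETS:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} NOT reachable: {exc}")
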
Jan 23 10:33:08 compute-0 ceph-mon[74335]: pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 783 B/s rd, 0 op/s
Jan 23 10:33:08 compute-0 sudo[282864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:33:08 compute-0 sudo[282864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:33:08 compute-0 sudo[282864]: pam_unix(sudo:session): session closed for user root
Jan 23 10:33:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 522 B/s rd, 0 op/s
Jan 23 10:33:08 compute-0 nova_compute[249229]: 2026-01-23 10:33:08.677 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:33:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:08.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:33:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:08.950Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:09 compute-0 nova_compute[249229]: 2026-01-23 10:33:09.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:33:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:09.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:09] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:33:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:09] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:33:10 compute-0 ceph-mon[74335]: pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 522 B/s rd, 0 op/s
Jan 23 10:33:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 522 B/s rd, 0 op/s
Jan 23 10:33:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:33:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:10.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:33:11 compute-0 nova_compute[249229]: 2026-01-23 10:33:11.431 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:11.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:12 compute-0 ceph-mon[74335]: pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 522 B/s rd, 0 op/s
Jan 23 10:33:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 764 B/s rd, 0 op/s
Jan 23 10:33:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:33:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:12.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:33:13 compute-0 nova_compute[249229]: 2026-01-23 10:33:13.680 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:13.750Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:33:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:13.751Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:14 compute-0 ceph-mon[74335]: pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 764 B/s rd, 0 op/s
Jan 23 10:33:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 509 B/s rd, 0 op/s
Jan 23 10:33:14 compute-0 nova_compute[249229]: 2026-01-23 10:33:14.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:33:14 compute-0 nova_compute[249229]: 2026-01-23 10:33:14.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:33:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:14.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:15 compute-0 ceph-mon[74335]: pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 509 B/s rd, 0 op/s
Jan 23 10:33:15 compute-0 podman[282896]: 2026-01-23 10:33:15.528177406 +0000 UTC m=+0.056600058 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 10:33:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:15.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:16 compute-0 nova_compute[249229]: 2026-01-23 10:33:16.435 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 764 B/s rd, 0 op/s
Jan 23 10:33:16 compute-0 nova_compute[249229]: 2026-01-23 10:33:16.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:33:16 compute-0 nova_compute[249229]: 2026-01-23 10:33:16.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:33:16 compute-0 nova_compute[249229]: 2026-01-23 10:33:16.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:33:16 compute-0 nova_compute[249229]: 2026-01-23 10:33:16.743 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:33:16 compute-0 nova_compute[249229]: 2026-01-23 10:33:16.744 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:33:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:33:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:16.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:33:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:17 compute-0 ceph-mon[74335]: pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 764 B/s rd, 0 op/s
Jan 23 10:33:17 compute-0 nova_compute[249229]: 2026-01-23 10:33:17.715 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:33:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:17.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:17.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.014 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.014 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.014 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.014 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.015 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:33:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:33:18 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4059943484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:33:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 509 B/s rd, 0 op/s
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.511 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:33:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1310870223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:33:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/944075048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:33:18 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4059943484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.674 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.675 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4498MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.675 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.675 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.681 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.800 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.800 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:33:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:18.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:18 compute-0 nova_compute[249229]: 2026-01-23 10:33:18.866 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:33:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:18.951Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:33:19 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1727738214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:33:19 compute-0 nova_compute[249229]: 2026-01-23 10:33:19.339 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:33:19 compute-0 nova_compute[249229]: 2026-01-23 10:33:19.344 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:33:19 compute-0 nova_compute[249229]: 2026-01-23 10:33:19.380 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:33:19 compute-0 nova_compute[249229]: 2026-01-23 10:33:19.382 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:33:19 compute-0 nova_compute[249229]: 2026-01-23 10:33:19.382 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:33:19 compute-0 ceph-mon[74335]: pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 509 B/s rd, 0 op/s
Jan 23 10:33:19 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1727738214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:33:19 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2868338412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:33:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:19.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:19] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:33:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:19] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:33:20
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.nfs', 'backups']
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:33:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:33:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:33:20 compute-0 nova_compute[249229]: 2026-01-23 10:33:20.375 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:33:20 compute-0 nova_compute[249229]: 2026-01-23 10:33:20.397 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 509 B/s rd, 0 op/s
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:33:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:33:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3715483574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:33:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:33:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:20.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:21 compute-0 nova_compute[249229]: 2026-01-23 10:33:21.440 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:21 compute-0 nova_compute[249229]: 2026-01-23 10:33:21.729 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:33:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:21.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:21 compute-0 ceph-mon[74335]: pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 509 B/s rd, 0 op/s
Jan 23 10:33:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 764 B/s rd, 0 op/s
Jan 23 10:33:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:22.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:23 compute-0 nova_compute[249229]: 2026-01-23 10:33:23.684 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:23 compute-0 nova_compute[249229]: 2026-01-23 10:33:23.715 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:33:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:23.753Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:23.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:23 compute-0 ceph-mon[74335]: pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 764 B/s rd, 0 op/s
Jan 23 10:33:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:24.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:25 compute-0 nova_compute[249229]: 2026-01-23 10:33:25.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:33:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:25.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:25 compute-0 ceph-mon[74335]: pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:26 compute-0 nova_compute[249229]: 2026-01-23 10:33:26.443 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:26.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:27.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:27.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:27 compute-0 ceph-mon[74335]: pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:28 compute-0 sudo[282973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:33:28 compute-0 sudo[282973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:33:28 compute-0 sudo[282973]: pam_unix(sudo:session): session closed for user root
Jan 23 10:33:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:28 compute-0 nova_compute[249229]: 2026-01-23 10:33:28.685 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:28.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:28.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:33:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:29.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:33:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:29] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Jan 23 10:33:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:29] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Jan 23 10:33:30 compute-0 ceph-mon[74335]: pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:30.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:31 compute-0 nova_compute[249229]: 2026-01-23 10:33:31.446 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:32 compute-0 ceph-mon[74335]: pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:33:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:32.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:33:33 compute-0 nova_compute[249229]: 2026-01-23 10:33:33.687 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:33.754Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:33:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:33.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:33:34 compute-0 ceph-mon[74335]: pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:34.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:33:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:33:35 compute-0 ceph-mon[74335]: pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:33:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:35.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:36 compute-0 nova_compute[249229]: 2026-01-23 10:33:36.450 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:33:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:36.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:33:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:37 compute-0 podman[283007]: 2026-01-23 10:33:37.560694786 +0000 UTC m=+0.085533429 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 23 10:33:37 compute-0 ceph-mon[74335]: pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:37.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:37.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:38 compute-0 nova_compute[249229]: 2026-01-23 10:33:38.737 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:38.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:38.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:33:39 compute-0 ceph-mon[74335]: pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:39.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:39] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:33:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:39] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:33:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:33:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:33:41 compute-0 nova_compute[249229]: 2026-01-23 10:33:41.454 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:41 compute-0 ceph-mon[74335]: pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:33:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:41.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:33:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:42.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:43 compute-0 nova_compute[249229]: 2026-01-23 10:33:43.740 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:43.755Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:44 compute-0 ceph-mon[74335]: pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:33:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:44.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:33:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:45.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:46 compute-0 ceph-mon[74335]: pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:46 compute-0 nova_compute[249229]: 2026-01-23 10:33:46.458 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:46 compute-0 podman[283043]: 2026-01-23 10:33:46.528586691 +0000 UTC m=+0.052042398 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 10:33:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:33:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:46.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:33:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:33:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:47.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:33:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:47.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:48 compute-0 ceph-mon[74335]: pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:48 compute-0 sudo[283065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:33:48 compute-0 sudo[283065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:33:48 compute-0 sudo[283065]: pam_unix(sudo:session): session closed for user root
Jan 23 10:33:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:33:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2430308627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:33:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:33:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2430308627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:33:48 compute-0 nova_compute[249229]: 2026-01-23 10:33:48.742 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:48.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:48.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2430308627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:33:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2430308627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:33:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:49.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:49] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:33:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:49] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:33:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:33:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:33:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:33:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:33:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:33:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:33:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:33:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:33:50 compute-0 ceph-mon[74335]: pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:33:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:50.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:51 compute-0 nova_compute[249229]: 2026-01-23 10:33:51.462 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:51.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:52 compute-0 ceph-mon[74335]: pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:52.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:53.757Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:53 compute-0 nova_compute[249229]: 2026-01-23 10:33:53.765 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:53 compute-0 ceph-mon[74335]: pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:53.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:33:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:54.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:33:55 compute-0 ceph-mon[74335]: pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:55.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:56 compute-0 nova_compute[249229]: 2026-01-23 10:33:56.465 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:56.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:33:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:57.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:57.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:57 compute-0 ceph-mon[74335]: pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:33:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:33:58 compute-0 nova_compute[249229]: 2026-01-23 10:33:58.767 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:33:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:33:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:58.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:33:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:58.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:33:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:58.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:33:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:33:58.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:33:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:33:59.789 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:33:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:33:59.790 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:33:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:33:59.790 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:33:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:33:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:33:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:59.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:33:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:59] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:33:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:33:59] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:34:00 compute-0 ceph-mon[74335]: pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:34:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:00.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:34:01 compute-0 nova_compute[249229]: 2026-01-23 10:34:01.468 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:01 compute-0 ceph-mon[74335]: pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:34:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:01.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:34:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:02.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:03.758Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:34:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:03.758Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:34:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:03.758Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:34:03 compute-0 nova_compute[249229]: 2026-01-23 10:34:03.769 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:03.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:03 compute-0 ceph-mon[74335]: pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:04.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:34:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:34:05 compute-0 sudo[283107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:34:05 compute-0 sudo[283107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:05 compute-0 sudo[283107]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:05 compute-0 sudo[283132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:34:05 compute-0 sudo[283132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:05.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:05 compute-0 sudo[283132]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:05 compute-0 ceph-mon[74335]: pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:34:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:34:06 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:34:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:34:06 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:34:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 794 B/s rd, 0 op/s
Jan 23 10:34:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 0 op/s
Jan 23 10:34:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:34:06 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:34:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:34:06 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:34:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:34:06 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:34:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:34:06 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:34:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:34:06 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:34:06 compute-0 sudo[283188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:34:06 compute-0 sudo[283188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:06 compute-0 sudo[283188]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:06 compute-0 sudo[283213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:34:06 compute-0 sudo[283213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:06 compute-0 nova_compute[249229]: 2026-01-23 10:34:06.471 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:06 compute-0 podman[283279]: 2026-01-23 10:34:06.638842434 +0000 UTC m=+0.041789887 container create b7e0099fc880cb3879cd8266342f8b81b374fcfba21caef883fe1d1e0e264096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shockley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 10:34:06 compute-0 systemd[1]: Started libpod-conmon-b7e0099fc880cb3879cd8266342f8b81b374fcfba21caef883fe1d1e0e264096.scope.
Jan 23 10:34:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:34:06 compute-0 podman[283279]: 2026-01-23 10:34:06.620846874 +0000 UTC m=+0.023794337 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:34:06 compute-0 podman[283279]: 2026-01-23 10:34:06.716375376 +0000 UTC m=+0.119322849 container init b7e0099fc880cb3879cd8266342f8b81b374fcfba21caef883fe1d1e0e264096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shockley, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 10:34:06 compute-0 podman[283279]: 2026-01-23 10:34:06.72286231 +0000 UTC m=+0.125809773 container start b7e0099fc880cb3879cd8266342f8b81b374fcfba21caef883fe1d1e0e264096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:34:06 compute-0 jovial_shockley[283295]: 167 167
Jan 23 10:34:06 compute-0 systemd[1]: libpod-b7e0099fc880cb3879cd8266342f8b81b374fcfba21caef883fe1d1e0e264096.scope: Deactivated successfully.
Jan 23 10:34:06 compute-0 podman[283279]: 2026-01-23 10:34:06.729033365 +0000 UTC m=+0.131980848 container attach b7e0099fc880cb3879cd8266342f8b81b374fcfba21caef883fe1d1e0e264096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 10:34:06 compute-0 podman[283279]: 2026-01-23 10:34:06.73026671 +0000 UTC m=+0.133214163 container died b7e0099fc880cb3879cd8266342f8b81b374fcfba21caef883fe1d1e0e264096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-448f2d0a1fe225f6c015d4232d6948c765666021f1ac7ca392de41719d762aff-merged.mount: Deactivated successfully.
Jan 23 10:34:06 compute-0 podman[283279]: 2026-01-23 10:34:06.767104586 +0000 UTC m=+0.170052039 container remove b7e0099fc880cb3879cd8266342f8b81b374fcfba21caef883fe1d1e0e264096 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:34:06 compute-0 systemd[1]: libpod-conmon-b7e0099fc880cb3879cd8266342f8b81b374fcfba21caef883fe1d1e0e264096.scope: Deactivated successfully.
Jan 23 10:34:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:06.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:06 compute-0 podman[283320]: 2026-01-23 10:34:06.936908827 +0000 UTC m=+0.041326054 container create 23342dc563832313d72a79f23e15cd3f3debfe08783ed52bc9cc836672d24d09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feistel, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:34:06 compute-0 systemd[1]: Started libpod-conmon-23342dc563832313d72a79f23e15cd3f3debfe08783ed52bc9cc836672d24d09.scope.
Jan 23 10:34:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:34:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:34:06 compute-0 ceph-mon[74335]: pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 794 B/s rd, 0 op/s
Jan 23 10:34:06 compute-0 ceph-mon[74335]: pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 0 op/s
Jan 23 10:34:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:34:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:34:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:34:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:34:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:34:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca4b22be30d226616d845e96d06fa450dec576d52be4e26d4e4b768ea4c65f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca4b22be30d226616d845e96d06fa450dec576d52be4e26d4e4b768ea4c65f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca4b22be30d226616d845e96d06fa450dec576d52be4e26d4e4b768ea4c65f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca4b22be30d226616d845e96d06fa450dec576d52be4e26d4e4b768ea4c65f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca4b22be30d226616d845e96d06fa450dec576d52be4e26d4e4b768ea4c65f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:07 compute-0 podman[283320]: 2026-01-23 10:34:06.919725419 +0000 UTC m=+0.024142666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:34:07 compute-0 podman[283320]: 2026-01-23 10:34:07.023611958 +0000 UTC m=+0.128029235 container init 23342dc563832313d72a79f23e15cd3f3debfe08783ed52bc9cc836672d24d09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feistel, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 10:34:07 compute-0 podman[283320]: 2026-01-23 10:34:07.02999537 +0000 UTC m=+0.134412597 container start 23342dc563832313d72a79f23e15cd3f3debfe08783ed52bc9cc836672d24d09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 23 10:34:07 compute-0 podman[283320]: 2026-01-23 10:34:07.033272093 +0000 UTC m=+0.137689340 container attach 23342dc563832313d72a79f23e15cd3f3debfe08783ed52bc9cc836672d24d09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Jan 23 10:34:07 compute-0 priceless_feistel[283336]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:34:07 compute-0 priceless_feistel[283336]: --> All data devices are unavailable
Jan 23 10:34:07 compute-0 systemd[1]: libpod-23342dc563832313d72a79f23e15cd3f3debfe08783ed52bc9cc836672d24d09.scope: Deactivated successfully.
Jan 23 10:34:07 compute-0 podman[283351]: 2026-01-23 10:34:07.407643801 +0000 UTC m=+0.024092975 container died 23342dc563832313d72a79f23e15cd3f3debfe08783ed52bc9cc836672d24d09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Jan 23 10:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-eca4b22be30d226616d845e96d06fa450dec576d52be4e26d4e4b768ea4c65f0-merged.mount: Deactivated successfully.
Jan 23 10:34:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:07 compute-0 podman[283351]: 2026-01-23 10:34:07.457311082 +0000 UTC m=+0.073760236 container remove 23342dc563832313d72a79f23e15cd3f3debfe08783ed52bc9cc836672d24d09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_feistel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 10:34:07 compute-0 systemd[1]: libpod-conmon-23342dc563832313d72a79f23e15cd3f3debfe08783ed52bc9cc836672d24d09.scope: Deactivated successfully.
Jan 23 10:34:07 compute-0 sudo[283213]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:07 compute-0 sudo[283366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:34:07 compute-0 sudo[283366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:07 compute-0 sudo[283366]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:07 compute-0 sudo[283397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:34:07 compute-0 sudo[283397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:07 compute-0 podman[283390]: 2026-01-23 10:34:07.697918713 +0000 UTC m=+0.092985311 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 10:34:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:07.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:07.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:08 compute-0 podman[283483]: 2026-01-23 10:34:08.040638122 +0000 UTC m=+0.040544182 container create 965fa2baf4efb6a2bf877dfa87d04dfd69c7235781bbafe4c19efb7b089b21ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:34:08 compute-0 systemd[1]: Started libpod-conmon-965fa2baf4efb6a2bf877dfa87d04dfd69c7235781bbafe4c19efb7b089b21ea.scope.
Jan 23 10:34:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:34:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 0 op/s
Jan 23 10:34:08 compute-0 podman[283483]: 2026-01-23 10:34:08.022786405 +0000 UTC m=+0.022692485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:34:08 compute-0 podman[283483]: 2026-01-23 10:34:08.11982479 +0000 UTC m=+0.119730870 container init 965fa2baf4efb6a2bf877dfa87d04dfd69c7235781bbafe4c19efb7b089b21ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:34:08 compute-0 podman[283483]: 2026-01-23 10:34:08.130471482 +0000 UTC m=+0.130377532 container start 965fa2baf4efb6a2bf877dfa87d04dfd69c7235781bbafe4c19efb7b089b21ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 10:34:08 compute-0 podman[283483]: 2026-01-23 10:34:08.133544499 +0000 UTC m=+0.133450579 container attach 965fa2baf4efb6a2bf877dfa87d04dfd69c7235781bbafe4c19efb7b089b21ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_boyd, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:34:08 compute-0 wonderful_boyd[283500]: 167 167
Jan 23 10:34:08 compute-0 systemd[1]: libpod-965fa2baf4efb6a2bf877dfa87d04dfd69c7235781bbafe4c19efb7b089b21ea.scope: Deactivated successfully.
Jan 23 10:34:08 compute-0 podman[283483]: 2026-01-23 10:34:08.13920671 +0000 UTC m=+0.139112770 container died 965fa2baf4efb6a2bf877dfa87d04dfd69c7235781bbafe4c19efb7b089b21ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_boyd, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 23 10:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbe0d10c1fa9aef2956c2e480be5b170bdf552d9250b6cc57cfa82946dab449a-merged.mount: Deactivated successfully.
Jan 23 10:34:08 compute-0 podman[283483]: 2026-01-23 10:34:08.190055674 +0000 UTC m=+0.189961734 container remove 965fa2baf4efb6a2bf877dfa87d04dfd69c7235781bbafe4c19efb7b089b21ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 10:34:08 compute-0 systemd[1]: libpod-conmon-965fa2baf4efb6a2bf877dfa87d04dfd69c7235781bbafe4c19efb7b089b21ea.scope: Deactivated successfully.
Jan 23 10:34:08 compute-0 podman[283523]: 2026-01-23 10:34:08.375687824 +0000 UTC m=+0.045802801 container create 666051b689a353c227f67447b8d01a43021a5a694f7e0be787e8f34b54e9bc29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 10:34:08 compute-0 systemd[1]: Started libpod-conmon-666051b689a353c227f67447b8d01a43021a5a694f7e0be787e8f34b54e9bc29.scope.
Jan 23 10:34:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f8e9f762ffeda0e7a2e011656a8d78a031bc2becdbe69e251b93a7fc2e456f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f8e9f762ffeda0e7a2e011656a8d78a031bc2becdbe69e251b93a7fc2e456f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f8e9f762ffeda0e7a2e011656a8d78a031bc2becdbe69e251b93a7fc2e456f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f8e9f762ffeda0e7a2e011656a8d78a031bc2becdbe69e251b93a7fc2e456f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:08 compute-0 podman[283523]: 2026-01-23 10:34:08.354454921 +0000 UTC m=+0.024569928 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:34:08 compute-0 podman[283523]: 2026-01-23 10:34:08.459546035 +0000 UTC m=+0.129661042 container init 666051b689a353c227f67447b8d01a43021a5a694f7e0be787e8f34b54e9bc29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 23 10:34:08 compute-0 podman[283523]: 2026-01-23 10:34:08.466709958 +0000 UTC m=+0.136824935 container start 666051b689a353c227f67447b8d01a43021a5a694f7e0be787e8f34b54e9bc29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 23 10:34:08 compute-0 podman[283523]: 2026-01-23 10:34:08.474849979 +0000 UTC m=+0.144964976 container attach 666051b689a353c227f67447b8d01a43021a5a694f7e0be787e8f34b54e9bc29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:34:08 compute-0 sudo[283542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:34:08 compute-0 sudo[283542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:08 compute-0 sudo[283542]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:08 compute-0 laughing_swartz[283539]: {
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:     "1": [
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:         {
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             "devices": [
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "/dev/loop3"
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             ],
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             "lv_name": "ceph_lv0",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             "lv_size": "21470642176",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             "name": "ceph_lv0",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             "tags": {
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.cluster_name": "ceph",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.crush_device_class": "",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.encrypted": "0",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.osd_id": "1",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.type": "block",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.vdo": "0",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:                 "ceph.with_tpm": "0"
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             },
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             "type": "block",
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:             "vg_name": "ceph_vg0"
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:         }
Jan 23 10:34:08 compute-0 laughing_swartz[283539]:     ]
Jan 23 10:34:08 compute-0 laughing_swartz[283539]: }
Jan 23 10:34:08 compute-0 nova_compute[249229]: 2026-01-23 10:34:08.773 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:08 compute-0 systemd[1]: libpod-666051b689a353c227f67447b8d01a43021a5a694f7e0be787e8f34b54e9bc29.scope: Deactivated successfully.
Jan 23 10:34:08 compute-0 podman[283523]: 2026-01-23 10:34:08.794635818 +0000 UTC m=+0.464750795 container died 666051b689a353c227f67447b8d01a43021a5a694f7e0be787e8f34b54e9bc29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 23 10:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9f8e9f762ffeda0e7a2e011656a8d78a031bc2becdbe69e251b93a7fc2e456f-merged.mount: Deactivated successfully.
Jan 23 10:34:08 compute-0 podman[283523]: 2026-01-23 10:34:08.839796481 +0000 UTC m=+0.509911458 container remove 666051b689a353c227f67447b8d01a43021a5a694f7e0be787e8f34b54e9bc29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 23 10:34:08 compute-0 systemd[1]: libpod-conmon-666051b689a353c227f67447b8d01a43021a5a694f7e0be787e8f34b54e9bc29.scope: Deactivated successfully.
Jan 23 10:34:08 compute-0 sudo[283397]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:08.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:08 compute-0 sudo[283586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:34:08 compute-0 sudo[283586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:08 compute-0 sudo[283586]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:08.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:08 compute-0 sudo[283611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:34:09 compute-0 sudo[283611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:09 compute-0 ceph-mon[74335]: pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 0 op/s
Jan 23 10:34:09 compute-0 podman[283676]: 2026-01-23 10:34:09.386922554 +0000 UTC m=+0.034932973 container create 2c00aa066e49497ffa1edf5e08543958d69de15d3d313adf85717f68478fdaaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:34:09 compute-0 systemd[1]: Started libpod-conmon-2c00aa066e49497ffa1edf5e08543958d69de15d3d313adf85717f68478fdaaf.scope.
Jan 23 10:34:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:34:09 compute-0 podman[283676]: 2026-01-23 10:34:09.465145665 +0000 UTC m=+0.113156114 container init 2c00aa066e49497ffa1edf5e08543958d69de15d3d313adf85717f68478fdaaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 10:34:09 compute-0 podman[283676]: 2026-01-23 10:34:09.371048473 +0000 UTC m=+0.019058912 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:34:09 compute-0 podman[283676]: 2026-01-23 10:34:09.471412443 +0000 UTC m=+0.119422862 container start 2c00aa066e49497ffa1edf5e08543958d69de15d3d313adf85717f68478fdaaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_kirch, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 10:34:09 compute-0 sleepy_kirch[283692]: 167 167
Jan 23 10:34:09 compute-0 podman[283676]: 2026-01-23 10:34:09.475209791 +0000 UTC m=+0.123220230 container attach 2c00aa066e49497ffa1edf5e08543958d69de15d3d313adf85717f68478fdaaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_kirch, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 10:34:09 compute-0 systemd[1]: libpod-2c00aa066e49497ffa1edf5e08543958d69de15d3d313adf85717f68478fdaaf.scope: Deactivated successfully.
Jan 23 10:34:09 compute-0 podman[283676]: 2026-01-23 10:34:09.476053015 +0000 UTC m=+0.124063434 container died 2c00aa066e49497ffa1edf5e08543958d69de15d3d313adf85717f68478fdaaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_kirch, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Jan 23 10:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5196d076f88809dd489336c8629cad8748453ec16c79cb5ab2a0f9531621a5e-merged.mount: Deactivated successfully.
Jan 23 10:34:09 compute-0 podman[283676]: 2026-01-23 10:34:09.518993774 +0000 UTC m=+0.167004193 container remove 2c00aa066e49497ffa1edf5e08543958d69de15d3d313adf85717f68478fdaaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_kirch, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 23 10:34:09 compute-0 systemd[1]: libpod-conmon-2c00aa066e49497ffa1edf5e08543958d69de15d3d313adf85717f68478fdaaf.scope: Deactivated successfully.
Jan 23 10:34:09 compute-0 podman[283716]: 2026-01-23 10:34:09.668425116 +0000 UTC m=+0.039536763 container create f89373116d94a0375ea3f8e1fc6f39d8102a56cc4b8539211b320af3d9830cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_jang, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 10:34:09 compute-0 systemd[1]: Started libpod-conmon-f89373116d94a0375ea3f8e1fc6f39d8102a56cc4b8539211b320af3d9830cd5.scope.
Jan 23 10:34:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29026ab75b622e3c0eaf6850ecf0415c744c679094265678b9b034a1f6beda91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29026ab75b622e3c0eaf6850ecf0415c744c679094265678b9b034a1f6beda91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29026ab75b622e3c0eaf6850ecf0415c744c679094265678b9b034a1f6beda91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29026ab75b622e3c0eaf6850ecf0415c744c679094265678b9b034a1f6beda91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:34:09 compute-0 podman[283716]: 2026-01-23 10:34:09.651869186 +0000 UTC m=+0.022980863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:34:09 compute-0 podman[283716]: 2026-01-23 10:34:09.754288454 +0000 UTC m=+0.125400131 container init f89373116d94a0375ea3f8e1fc6f39d8102a56cc4b8539211b320af3d9830cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_jang, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:34:09 compute-0 podman[283716]: 2026-01-23 10:34:09.761878449 +0000 UTC m=+0.132990096 container start f89373116d94a0375ea3f8e1fc6f39d8102a56cc4b8539211b320af3d9830cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_jang, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:34:09 compute-0 podman[283716]: 2026-01-23 10:34:09.765034169 +0000 UTC m=+0.136145816 container attach f89373116d94a0375ea3f8e1fc6f39d8102a56cc4b8539211b320af3d9830cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Jan 23 10:34:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:09.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:09] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:34:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:09] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:34:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 0 op/s
Jan 23 10:34:10 compute-0 ceph-mon[74335]: pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 0 op/s
Jan 23 10:34:10 compute-0 lvm[283807]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:34:10 compute-0 lvm[283807]: VG ceph_vg0 finished
Jan 23 10:34:10 compute-0 sharp_jang[283732]: {}
Jan 23 10:34:10 compute-0 systemd[1]: libpod-f89373116d94a0375ea3f8e1fc6f39d8102a56cc4b8539211b320af3d9830cd5.scope: Deactivated successfully.
Jan 23 10:34:10 compute-0 systemd[1]: libpod-f89373116d94a0375ea3f8e1fc6f39d8102a56cc4b8539211b320af3d9830cd5.scope: Consumed 1.110s CPU time.
Jan 23 10:34:10 compute-0 podman[283716]: 2026-01-23 10:34:10.46353212 +0000 UTC m=+0.834643777 container died f89373116d94a0375ea3f8e1fc6f39d8102a56cc4b8539211b320af3d9830cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_jang, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:34:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:10.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-29026ab75b622e3c0eaf6850ecf0415c744c679094265678b9b034a1f6beda91-merged.mount: Deactivated successfully.
Jan 23 10:34:11 compute-0 podman[283716]: 2026-01-23 10:34:11.205962958 +0000 UTC m=+1.577074605 container remove f89373116d94a0375ea3f8e1fc6f39d8102a56cc4b8539211b320af3d9830cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_jang, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 10:34:11 compute-0 systemd[1]: libpod-conmon-f89373116d94a0375ea3f8e1fc6f39d8102a56cc4b8539211b320af3d9830cd5.scope: Deactivated successfully.
Jan 23 10:34:11 compute-0 sudo[283611]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:34:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:34:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:34:11 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:34:11 compute-0 sudo[283825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:34:11 compute-0 sudo[283825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:11 compute-0 sudo[283825]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:11 compute-0 nova_compute[249229]: 2026-01-23 10:34:11.474 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:11 compute-0 nova_compute[249229]: 2026-01-23 10:34:11.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:34:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:11.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 0 op/s
Jan 23 10:34:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:34:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:34:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:12.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:13 compute-0 ceph-mon[74335]: pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 0 op/s
Jan 23 10:34:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:13.759Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:13 compute-0 nova_compute[249229]: 2026-01-23 10:34:13.773 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:13.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 0 op/s
Jan 23 10:34:14 compute-0 nova_compute[249229]: 2026-01-23 10:34:14.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:34:14 compute-0 nova_compute[249229]: 2026-01-23 10:34:14.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:34:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:14.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:15 compute-0 ceph-mon[74335]: pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 0 op/s
Jan 23 10:34:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:15.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:16 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Jan 23 10:34:16 compute-0 nova_compute[249229]: 2026-01-23 10:34:16.478 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:16 compute-0 ceph-mon[74335]: pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Jan 23 10:34:16 compute-0 nova_compute[249229]: 2026-01-23 10:34:16.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:34:16 compute-0 nova_compute[249229]: 2026-01-23 10:34:16.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:34:16 compute-0 nova_compute[249229]: 2026-01-23 10:34:16.718 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:34:16 compute-0 nova_compute[249229]: 2026-01-23 10:34:16.733 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:34:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:16.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:17 compute-0 podman[283856]: 2026-01-23 10:34:17.561403933 +0000 UTC m=+0.085774866 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 10:34:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:17.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:34:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:17.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:34:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:17.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:18 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:18 compute-0 nova_compute[249229]: 2026-01-23 10:34:18.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:34:18 compute-0 nova_compute[249229]: 2026-01-23 10:34:18.775 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:34:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:18.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:34:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:18.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:19 compute-0 ceph-mon[74335]: pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:19 compute-0 nova_compute[249229]: 2026-01-23 10:34:19.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:34:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:19.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:19] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:34:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:19] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:34:20
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'backups', '.mgr', 'default.rgw.log', 'images', 'default.rgw.meta', '.rgw.root', '.nfs', 'default.rgw.control', 'vms']
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:34:20 compute-0 nova_compute[249229]: 2026-01-23 10:34:20.065 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:34:20 compute-0 nova_compute[249229]: 2026-01-23 10:34:20.066 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:34:20 compute-0 nova_compute[249229]: 2026-01-23 10:34:20.066 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:34:20 compute-0 nova_compute[249229]: 2026-01-23 10:34:20.066 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:34:20 compute-0 nova_compute[249229]: 2026-01-23 10:34:20.066 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:34:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:34:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:34:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:34:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/740635230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:34:20 compute-0 nova_compute[249229]: 2026-01-23 10:34:20.499 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:34:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:34:20 compute-0 nova_compute[249229]: 2026-01-23 10:34:20.681 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:34:20 compute-0 nova_compute[249229]: 2026-01-23 10:34:20.683 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4483MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:34:20 compute-0 nova_compute[249229]: 2026-01-23 10:34:20.683 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:34:20 compute-0 nova_compute[249229]: 2026-01-23 10:34:20.683 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:34:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1917450218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:34:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:34:20 compute-0 ceph-mon[74335]: pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/740635230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:34:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/505590307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:34:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:20.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:21 compute-0 nova_compute[249229]: 2026-01-23 10:34:21.259 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:34:21 compute-0 nova_compute[249229]: 2026-01-23 10:34:21.260 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:34:21 compute-0 nova_compute[249229]: 2026-01-23 10:34:21.279 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:34:21 compute-0 nova_compute[249229]: 2026-01-23 10:34:21.481 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:21 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:34:21 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1536309353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:34:21 compute-0 nova_compute[249229]: 2026-01-23 10:34:21.726 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:34:21 compute-0 nova_compute[249229]: 2026-01-23 10:34:21.732 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:34:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:21.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:21 compute-0 nova_compute[249229]: 2026-01-23 10:34:21.891 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:34:21 compute-0 nova_compute[249229]: 2026-01-23 10:34:21.893 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:34:21 compute-0 nova_compute[249229]: 2026-01-23 10:34:21.893 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:34:22 compute-0 ceph-mgr[74633]: [devicehealth INFO root] Check health
Jan 23 10:34:22 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:22 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1049699909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:34:22 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1536309353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:34:22 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1417754147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:34:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:22 compute-0 nova_compute[249229]: 2026-01-23 10:34:22.894 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:34:22 compute-0 nova_compute[249229]: 2026-01-23 10:34:22.895 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:34:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:23 compute-0 ceph-mon[74335]: pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:23.760Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:23 compute-0 nova_compute[249229]: 2026-01-23 10:34:23.777 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:23.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:24 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:24.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:25 compute-0 ceph-mon[74335]: pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:25 compute-0 nova_compute[249229]: 2026-01-23 10:34:25.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:34:25 compute-0 nova_compute[249229]: 2026-01-23 10:34:25.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:34:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:25.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:26 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:26 compute-0 nova_compute[249229]: 2026-01-23 10:34:26.485 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:26 compute-0 ceph-mon[74335]: pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:26.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:27.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:34:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:27.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:34:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:27.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:34:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:27.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:28 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:28 compute-0 sudo[283932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:34:28 compute-0 sudo[283932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:28 compute-0 sudo[283932]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:28 compute-0 nova_compute[249229]: 2026-01-23 10:34:28.779 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:28.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:28.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:29 compute-0 ceph-mon[74335]: pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:29.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:29] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:34:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:29] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:34:30 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:30 compute-0 ceph-mon[74335]: pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:30.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:31 compute-0 nova_compute[249229]: 2026-01-23 10:34:31.487 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:31.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:32 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:32 compute-0 ceph-mon[74335]: pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:32.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:33.760Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:33 compute-0 nova_compute[249229]: 2026-01-23 10:34:33.783 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:33.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:34 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:34.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:35 compute-0 ceph-mon[74335]: pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:34:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:34:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:35.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:34:36 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:36 compute-0 nova_compute[249229]: 2026-01-23 10:34:36.490 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:36.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:37 compute-0 ceph-mon[74335]: pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:37.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:37.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:38 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:38 compute-0 podman[283967]: 2026-01-23 10:34:38.586795639 +0000 UTC m=+0.116252941 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 10:34:38 compute-0 nova_compute[249229]: 2026-01-23 10:34:38.784 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:34:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:38.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:34:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:38.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:39 compute-0 ceph-mon[74335]: pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:39.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:39] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:34:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:39] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:34:40 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:40 compute-0 ceph-mon[74335]: pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:40.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:41 compute-0 nova_compute[249229]: 2026-01-23 10:34:41.493 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:41.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:42 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:42.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:43 compute-0 ceph-mon[74335]: pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:43.761Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:34:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:43.762Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:34:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:43.762Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:34:43 compute-0 nova_compute[249229]: 2026-01-23 10:34:43.785 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:34:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:43.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:34:44 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:44.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:45 compute-0 ceph-mon[74335]: pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:45.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:46 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:46 compute-0 nova_compute[249229]: 2026-01-23 10:34:46.496 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:34:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:46.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:34:47 compute-0 ceph-mon[74335]: pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:47.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:47.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:48 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:48 compute-0 podman[284004]: 2026-01-23 10:34:48.52245422 +0000 UTC m=+0.049865637 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 10:34:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=cleanup t=2026-01-23T10:34:48.641654034Z level=info msg="Completed cleanup jobs" duration=17.60909ms
Jan 23 10:34:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:34:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4134536473' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:34:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:34:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4134536473' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:34:48 compute-0 sudo[284025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:34:48 compute-0 sudo[284025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:34:48 compute-0 sudo[284025]: pam_unix(sudo:session): session closed for user root
Jan 23 10:34:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=plugins.update.checker t=2026-01-23T10:34:48.744842794Z level=info msg="Update check succeeded" duration=50.823943ms
Jan 23 10:34:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0[104501]: logger=grafana.update.checker t=2026-01-23T10:34:48.745329007Z level=info msg="Update check succeeded" duration=48.286511ms
Jan 23 10:34:48 compute-0 nova_compute[249229]: 2026-01-23 10:34:48.787 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:48.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:48.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:49 compute-0 ceph-mon[74335]: pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/4134536473' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:34:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/4134536473' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:34:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:49.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:49] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:34:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:49] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:34:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:34:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:34:50 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:34:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:34:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:34:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:34:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:34:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:34:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:34:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:50.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:51 compute-0 ceph-mon[74335]: pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:51 compute-0 nova_compute[249229]: 2026-01-23 10:34:51.500 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:51.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:52 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:52.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:53 compute-0 ceph-mon[74335]: pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:53.762Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:34:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:53.763Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:53 compute-0 nova_compute[249229]: 2026-01-23 10:34:53.789 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:34:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:53.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:34:54 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:34:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:54.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:34:55 compute-0 ceph-mon[74335]: pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:55.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:56 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:56 compute-0 nova_compute[249229]: 2026-01-23 10:34:56.501 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:56.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:34:57 compute-0 ceph-mon[74335]: pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:34:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:57.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:57.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:58 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:58 compute-0 ceph-mon[74335]: pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:34:58 compute-0 nova_compute[249229]: 2026-01-23 10:34:58.790 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:34:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:34:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:58.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:34:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:34:58.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:34:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:34:59.791 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:34:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:34:59.791 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:34:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:34:59.791 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:34:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:34:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:34:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:59.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:34:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:59] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:34:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:34:59] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:35:00 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:00.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:01 compute-0 ceph-mon[74335]: pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:01 compute-0 nova_compute[249229]: 2026-01-23 10:35:01.552 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:01.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:02 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:02 compute-0 ceph-mon[74335]: pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:02.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:03.764Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:03 compute-0 nova_compute[249229]: 2026-01-23 10:35:03.840 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:03.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:04 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:04 compute-0 ceph-mon[74335]: pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:04.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:35:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:35:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:35:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:05.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:06 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:06 compute-0 nova_compute[249229]: 2026-01-23 10:35:06.556 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:06 compute-0 ceph-mon[74335]: pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:06.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:07.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:07.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:08 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:08 compute-0 nova_compute[249229]: 2026-01-23 10:35:08.842 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:08.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:08.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:09 compute-0 sudo[284070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:35:09 compute-0 sudo[284070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:09 compute-0 sudo[284070]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:09 compute-0 podman[284094]: 2026-01-23 10:35:09.173315152 +0000 UTC m=+0.081164435 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 10:35:09 compute-0 ceph-mon[74335]: pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:09.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:09] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:35:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:09] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:35:10 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:10 compute-0 ceph-mon[74335]: pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:10.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:11 compute-0 nova_compute[249229]: 2026-01-23 10:35:11.616 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:11 compute-0 sudo[284123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:35:11 compute-0 sudo[284123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:11 compute-0 sudo[284123]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:11 compute-0 sudo[284148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:35:11 compute-0 sudo[284148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:11.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:12 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:12 compute-0 sudo[284148]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 23 10:35:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 10:35:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 23 10:35:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 10:35:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 10:35:12 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 10:35:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:12 compute-0 nova_compute[249229]: 2026-01-23 10:35:12.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:35:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:12.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:13 compute-0 ceph-mon[74335]: pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 23 10:35:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:13.765Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:13 compute-0 nova_compute[249229]: 2026-01-23 10:35:13.844 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 23 10:35:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:13.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:14 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:14 compute-0 nova_compute[249229]: 2026-01-23 10:35:14.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:35:14 compute-0 nova_compute[249229]: 2026-01-23 10:35:14.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:35:14 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:14.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:15 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:15 compute-0 ceph-mon[74335]: pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:15 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 23 10:35:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 10:35:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:35:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:35:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:35:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:35:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Jan 23 10:35:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:35:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:35:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:15.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:35:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:35:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:35:15 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:35:15 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:35:15 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:35:16 compute-0 sudo[284209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:35:16 compute-0 sudo[284209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:16 compute-0 sudo[284209]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:16 compute-0 sudo[284234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:35:16 compute-0 sudo[284234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:16 compute-0 podman[284303]: 2026-01-23 10:35:16.46543148 +0000 UTC m=+0.027952075 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:35:16 compute-0 nova_compute[249229]: 2026-01-23 10:35:16.636 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:16.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:17 compute-0 podman[284303]: 2026-01-23 10:35:17.017700819 +0000 UTC m=+0.580221394 container create 4b6a597c67e246633ef2ba0345bce46069716ade1cd9f41d2e82776f7a2547c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 23 10:35:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 10:35:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:35:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:35:17 compute-0 ceph-mon[74335]: pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Jan 23 10:35:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:35:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:35:17 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:35:17 compute-0 systemd[1]: Started libpod-conmon-4b6a597c67e246633ef2ba0345bce46069716ade1cd9f41d2e82776f7a2547c5.scope.
Jan 23 10:35:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:35:17 compute-0 podman[284303]: 2026-01-23 10:35:17.29353541 +0000 UTC m=+0.856056005 container init 4b6a597c67e246633ef2ba0345bce46069716ade1cd9f41d2e82776f7a2547c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 10:35:17 compute-0 podman[284303]: 2026-01-23 10:35:17.300690183 +0000 UTC m=+0.863210758 container start 4b6a597c67e246633ef2ba0345bce46069716ade1cd9f41d2e82776f7a2547c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 10:35:17 compute-0 nifty_blackwell[284320]: 167 167
Jan 23 10:35:17 compute-0 podman[284303]: 2026-01-23 10:35:17.306309573 +0000 UTC m=+0.868830178 container attach 4b6a597c67e246633ef2ba0345bce46069716ade1cd9f41d2e82776f7a2547c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:35:17 compute-0 systemd[1]: libpod-4b6a597c67e246633ef2ba0345bce46069716ade1cd9f41d2e82776f7a2547c5.scope: Deactivated successfully.
Jan 23 10:35:17 compute-0 podman[284303]: 2026-01-23 10:35:17.306991842 +0000 UTC m=+0.869512417 container died 4b6a597c67e246633ef2ba0345bce46069716ade1cd9f41d2e82776f7a2547c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 10:35:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-de17d6b83f5d13d8ec2ef56eb1d6e9b81c2e296e60844bda78ef6662c008e718-merged.mount: Deactivated successfully.
Jan 23 10:35:17 compute-0 podman[284303]: 2026-01-23 10:35:17.371386741 +0000 UTC m=+0.933907316 container remove 4b6a597c67e246633ef2ba0345bce46069716ade1cd9f41d2e82776f7a2547c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_blackwell, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 10:35:17 compute-0 systemd[1]: libpod-conmon-4b6a597c67e246633ef2ba0345bce46069716ade1cd9f41d2e82776f7a2547c5.scope: Deactivated successfully.
Jan 23 10:35:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Jan 23 10:35:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:17 compute-0 podman[284343]: 2026-01-23 10:35:17.515005228 +0000 UTC m=+0.025843855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:35:17 compute-0 nova_compute[249229]: 2026-01-23 10:35:17.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:35:17 compute-0 nova_compute[249229]: 2026-01-23 10:35:17.718 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:35:17 compute-0 nova_compute[249229]: 2026-01-23 10:35:17.718 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:35:17 compute-0 podman[284343]: 2026-01-23 10:35:17.830846325 +0000 UTC m=+0.341684932 container create 9d81867e971e645c0f7d504b01483b99228d22c3713f38101736dc5d3d200351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:35:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:17.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:17 compute-0 nova_compute[249229]: 2026-01-23 10:35:17.905 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:35:17 compute-0 systemd[1]: Started libpod-conmon-9d81867e971e645c0f7d504b01483b99228d22c3713f38101736dc5d3d200351.scope.
Jan 23 10:35:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:17.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:17 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3a0a65b7beec9e3f7bbdfa7f8ce48fe6ee3aabad8a756c143a6ab5d782405d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3a0a65b7beec9e3f7bbdfa7f8ce48fe6ee3aabad8a756c143a6ab5d782405d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3a0a65b7beec9e3f7bbdfa7f8ce48fe6ee3aabad8a756c143a6ab5d782405d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3a0a65b7beec9e3f7bbdfa7f8ce48fe6ee3aabad8a756c143a6ab5d782405d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3a0a65b7beec9e3f7bbdfa7f8ce48fe6ee3aabad8a756c143a6ab5d782405d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:17 compute-0 podman[284343]: 2026-01-23 10:35:17.989590212 +0000 UTC m=+0.500428839 container init 9d81867e971e645c0f7d504b01483b99228d22c3713f38101736dc5d3d200351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:35:17 compute-0 podman[284343]: 2026-01-23 10:35:17.997176427 +0000 UTC m=+0.508015034 container start 9d81867e971e645c0f7d504b01483b99228d22c3713f38101736dc5d3d200351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:35:18 compute-0 podman[284343]: 2026-01-23 10:35:18.019606754 +0000 UTC m=+0.530445381 container attach 9d81867e971e645c0f7d504b01483b99228d22c3713f38101736dc5d3d200351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 23 10:35:18 compute-0 hopeful_poitras[284361]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:35:18 compute-0 hopeful_poitras[284361]: --> All data devices are unavailable
Jan 23 10:35:18 compute-0 systemd[1]: libpod-9d81867e971e645c0f7d504b01483b99228d22c3713f38101736dc5d3d200351.scope: Deactivated successfully.
Jan 23 10:35:18 compute-0 podman[284343]: 2026-01-23 10:35:18.36077755 +0000 UTC m=+0.871616157 container died 9d81867e971e645c0f7d504b01483b99228d22c3713f38101736dc5d3d200351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:35:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a3a0a65b7beec9e3f7bbdfa7f8ce48fe6ee3aabad8a756c143a6ab5d782405d-merged.mount: Deactivated successfully.
Jan 23 10:35:18 compute-0 ceph-mon[74335]: pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Jan 23 10:35:18 compute-0 podman[284343]: 2026-01-23 10:35:18.637453325 +0000 UTC m=+1.148291932 container remove 9d81867e971e645c0f7d504b01483b99228d22c3713f38101736dc5d3d200351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:35:18 compute-0 systemd[1]: libpod-conmon-9d81867e971e645c0f7d504b01483b99228d22c3713f38101736dc5d3d200351.scope: Deactivated successfully.
Jan 23 10:35:18 compute-0 sudo[284234]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:18 compute-0 podman[284392]: 2026-01-23 10:35:18.732167404 +0000 UTC m=+0.056644589 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 23 10:35:18 compute-0 sudo[284398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:35:18 compute-0 sudo[284398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:18 compute-0 sudo[284398]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:18 compute-0 sudo[284436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:35:18 compute-0 sudo[284436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:18 compute-0 nova_compute[249229]: 2026-01-23 10:35:18.882 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:18.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:18.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:19 compute-0 podman[284502]: 2026-01-23 10:35:19.192279467 +0000 UTC m=+0.060808957 container create 6207458fb85b52a91bf03874a39db0c4dc3780906187655df0dbf96827e21179 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:35:19 compute-0 podman[284502]: 2026-01-23 10:35:19.153499266 +0000 UTC m=+0.022028776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:35:19 compute-0 systemd[1]: Started libpod-conmon-6207458fb85b52a91bf03874a39db0c4dc3780906187655df0dbf96827e21179.scope.
Jan 23 10:35:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:35:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Jan 23 10:35:19 compute-0 podman[284502]: 2026-01-23 10:35:19.502030752 +0000 UTC m=+0.370560242 container init 6207458fb85b52a91bf03874a39db0c4dc3780906187655df0dbf96827e21179 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 10:35:19 compute-0 podman[284502]: 2026-01-23 10:35:19.510878833 +0000 UTC m=+0.379408313 container start 6207458fb85b52a91bf03874a39db0c4dc3780906187655df0dbf96827e21179 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_murdock, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:35:19 compute-0 podman[284502]: 2026-01-23 10:35:19.515127653 +0000 UTC m=+0.383657173 container attach 6207458fb85b52a91bf03874a39db0c4dc3780906187655df0dbf96827e21179 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_murdock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:35:19 compute-0 recursing_murdock[284518]: 167 167
Jan 23 10:35:19 compute-0 systemd[1]: libpod-6207458fb85b52a91bf03874a39db0c4dc3780906187655df0dbf96827e21179.scope: Deactivated successfully.
Jan 23 10:35:19 compute-0 podman[284502]: 2026-01-23 10:35:19.517577643 +0000 UTC m=+0.386107143 container died 6207458fb85b52a91bf03874a39db0c4dc3780906187655df0dbf96827e21179 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_murdock, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 10:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed80841885951cd5a52ea737b2cfdecfbea989b78bbaf1d0a5d51348e6211694-merged.mount: Deactivated successfully.
Jan 23 10:35:19 compute-0 podman[284502]: 2026-01-23 10:35:19.669503516 +0000 UTC m=+0.538032996 container remove 6207458fb85b52a91bf03874a39db0c4dc3780906187655df0dbf96827e21179 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_murdock, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 23 10:35:19 compute-0 systemd[1]: libpod-conmon-6207458fb85b52a91bf03874a39db0c4dc3780906187655df0dbf96827e21179.scope: Deactivated successfully.
Jan 23 10:35:19 compute-0 nova_compute[249229]: 2026-01-23 10:35:19.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:35:19 compute-0 podman[284544]: 2026-01-23 10:35:19.850739551 +0000 UTC m=+0.068900057 container create fb0f486435620f45460171f913fcf1bdca82683e0a5fae088cef67f4eece6e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 10:35:19 compute-0 podman[284544]: 2026-01-23 10:35:19.802979235 +0000 UTC m=+0.021139771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:35:19 compute-0 systemd[1]: Started libpod-conmon-fb0f486435620f45460171f913fcf1bdca82683e0a5fae088cef67f4eece6e91.scope.
Jan 23 10:35:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:19 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:35:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:35:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:19.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa7bd020bf9fa3c6ef13ffa52cd8be095a7df10ce937e9729a936043ca4cdfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa7bd020bf9fa3c6ef13ffa52cd8be095a7df10ce937e9729a936043ca4cdfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa7bd020bf9fa3c6ef13ffa52cd8be095a7df10ce937e9729a936043ca4cdfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa7bd020bf9fa3c6ef13ffa52cd8be095a7df10ce937e9729a936043ca4cdfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:19] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:35:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:19] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:35:19 compute-0 podman[284544]: 2026-01-23 10:35:19.977925712 +0000 UTC m=+0.196086248 container init fb0f486435620f45460171f913fcf1bdca82683e0a5fae088cef67f4eece6e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_neumann, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:35:19 compute-0 podman[284544]: 2026-01-23 10:35:19.987457572 +0000 UTC m=+0.205618078 container start fb0f486435620f45460171f913fcf1bdca82683e0a5fae088cef67f4eece6e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_neumann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 10:35:19 compute-0 podman[284544]: 2026-01-23 10:35:19.992924057 +0000 UTC m=+0.211084573 container attach fb0f486435620f45460171f913fcf1bdca82683e0a5fae088cef67f4eece6e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_neumann, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:35:20
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms']
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:35:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:35:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]: {
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:     "1": [
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:         {
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             "devices": [
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "/dev/loop3"
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             ],
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             "lv_name": "ceph_lv0",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             "lv_size": "21470642176",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             "name": "ceph_lv0",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             "tags": {
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.cluster_name": "ceph",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.crush_device_class": "",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.encrypted": "0",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.osd_id": "1",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.type": "block",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.vdo": "0",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:                 "ceph.with_tpm": "0"
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             },
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             "type": "block",
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:             "vg_name": "ceph_vg0"
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:         }
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]:     ]
Jan 23 10:35:20 compute-0 vibrant_neumann[284562]: }
Jan 23 10:35:20 compute-0 systemd[1]: libpod-fb0f486435620f45460171f913fcf1bdca82683e0a5fae088cef67f4eece6e91.scope: Deactivated successfully.
Jan 23 10:35:20 compute-0 podman[284544]: 2026-01-23 10:35:20.338040206 +0000 UTC m=+0.556200712 container died fb0f486435620f45460171f913fcf1bdca82683e0a5fae088cef67f4eece6e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_neumann, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:35:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6aa7bd020bf9fa3c6ef13ffa52cd8be095a7df10ce937e9729a936043ca4cdfd-merged.mount: Deactivated successfully.
Jan 23 10:35:20 compute-0 podman[284544]: 2026-01-23 10:35:20.510265486 +0000 UTC m=+0.728426022 container remove fb0f486435620f45460171f913fcf1bdca82683e0a5fae088cef67f4eece6e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 23 10:35:20 compute-0 systemd[1]: libpod-conmon-fb0f486435620f45460171f913fcf1bdca82683e0a5fae088cef67f4eece6e91.scope: Deactivated successfully.
Jan 23 10:35:20 compute-0 sudo[284436]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:20 compute-0 sudo[284585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:35:20 compute-0 sudo[284585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:20 compute-0 sudo[284585]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:35:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:35:20 compute-0 sudo[284610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:35:20 compute-0 sudo[284610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:20 compute-0 nova_compute[249229]: 2026-01-23 10:35:20.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:35:20 compute-0 ceph-mon[74335]: pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Jan 23 10:35:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:35:20 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/905573868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:35:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:20.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:21 compute-0 podman[284674]: 2026-01-23 10:35:21.062277027 +0000 UTC m=+0.024504166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:35:21 compute-0 podman[284674]: 2026-01-23 10:35:21.159858338 +0000 UTC m=+0.122085447 container create 5232595ce02773b46ae6664abedd271c66192d00941ff1ee4eb2fc33a9dec135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 10:35:21 compute-0 systemd[1]: Started libpod-conmon-5232595ce02773b46ae6664abedd271c66192d00941ff1ee4eb2fc33a9dec135.scope.
Jan 23 10:35:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:35:21 compute-0 podman[284674]: 2026-01-23 10:35:21.348092912 +0000 UTC m=+0.310320041 container init 5232595ce02773b46ae6664abedd271c66192d00941ff1ee4eb2fc33a9dec135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tharp, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:35:21 compute-0 podman[284674]: 2026-01-23 10:35:21.354768422 +0000 UTC m=+0.316995531 container start 5232595ce02773b46ae6664abedd271c66192d00941ff1ee4eb2fc33a9dec135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tharp, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 10:35:21 compute-0 ecstatic_tharp[284691]: 167 167
Jan 23 10:35:21 compute-0 systemd[1]: libpod-5232595ce02773b46ae6664abedd271c66192d00941ff1ee4eb2fc33a9dec135.scope: Deactivated successfully.
Jan 23 10:35:21 compute-0 podman[284674]: 2026-01-23 10:35:21.365811305 +0000 UTC m=+0.328038424 container attach 5232595ce02773b46ae6664abedd271c66192d00941ff1ee4eb2fc33a9dec135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:35:21 compute-0 podman[284674]: 2026-01-23 10:35:21.366301959 +0000 UTC m=+0.328529098 container died 5232595ce02773b46ae6664abedd271c66192d00941ff1ee4eb2fc33a9dec135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 10:35:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 819 B/s rd, 0 op/s
Jan 23 10:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7875c65a7c437c22cd7700f1d07e4fd28ac84e9fdaa7e7f5026c25db596c000-merged.mount: Deactivated successfully.
Jan 23 10:35:21 compute-0 podman[284674]: 2026-01-23 10:35:21.450203871 +0000 UTC m=+0.412430980 container remove 5232595ce02773b46ae6664abedd271c66192d00941ff1ee4eb2fc33a9dec135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:35:21 compute-0 systemd[1]: libpod-conmon-5232595ce02773b46ae6664abedd271c66192d00941ff1ee4eb2fc33a9dec135.scope: Deactivated successfully.
Jan 23 10:35:21 compute-0 podman[284716]: 2026-01-23 10:35:21.628816232 +0000 UTC m=+0.051187304 container create fa55e8e690dc9e72e9fe5c1213c1267bd7faef0eca4590f32810801153214cfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_buck, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 10:35:21 compute-0 nova_compute[249229]: 2026-01-23 10:35:21.639 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:21 compute-0 systemd[1]: Started libpod-conmon-fa55e8e690dc9e72e9fe5c1213c1267bd7faef0eca4590f32810801153214cfe.scope.
Jan 23 10:35:21 compute-0 podman[284716]: 2026-01-23 10:35:21.600651102 +0000 UTC m=+0.023022014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:35:21 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469cec26e9c96ee89c05e21c2b514acdad2e258411fca8d86ed475cc16f65368/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:21 compute-0 nova_compute[249229]: 2026-01-23 10:35:21.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469cec26e9c96ee89c05e21c2b514acdad2e258411fca8d86ed475cc16f65368/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469cec26e9c96ee89c05e21c2b514acdad2e258411fca8d86ed475cc16f65368/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469cec26e9c96ee89c05e21c2b514acdad2e258411fca8d86ed475cc16f65368/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:35:21 compute-0 podman[284716]: 2026-01-23 10:35:21.745823684 +0000 UTC m=+0.168194606 container init fa55e8e690dc9e72e9fe5c1213c1267bd7faef0eca4590f32810801153214cfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_buck, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 23 10:35:21 compute-0 podman[284716]: 2026-01-23 10:35:21.753282716 +0000 UTC m=+0.175653608 container start fa55e8e690dc9e72e9fe5c1213c1267bd7faef0eca4590f32810801153214cfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_buck, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 23 10:35:21 compute-0 nova_compute[249229]: 2026-01-23 10:35:21.761 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:35:21 compute-0 nova_compute[249229]: 2026-01-23 10:35:21.762 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:35:21 compute-0 nova_compute[249229]: 2026-01-23 10:35:21.763 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:35:21 compute-0 nova_compute[249229]: 2026-01-23 10:35:21.763 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:35:21 compute-0 nova_compute[249229]: 2026-01-23 10:35:21.763 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:35:21 compute-0 podman[284716]: 2026-01-23 10:35:21.784651756 +0000 UTC m=+0.207022648 container attach fa55e8e690dc9e72e9fe5c1213c1267bd7faef0eca4590f32810801153214cfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 23 10:35:21 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/573725531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:35:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:21.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:35:22 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836247010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:35:22 compute-0 nova_compute[249229]: 2026-01-23 10:35:22.289 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:35:22 compute-0 lvm[284829]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:35:22 compute-0 lvm[284829]: VG ceph_vg0 finished
Jan 23 10:35:22 compute-0 nova_compute[249229]: 2026-01-23 10:35:22.457 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:35:22 compute-0 nova_compute[249229]: 2026-01-23 10:35:22.459 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4454MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:35:22 compute-0 nova_compute[249229]: 2026-01-23 10:35:22.460 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:35:22 compute-0 nova_compute[249229]: 2026-01-23 10:35:22.460 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:35:22 compute-0 kind_buck[284732]: {}
Jan 23 10:35:22 compute-0 systemd[1]: libpod-fa55e8e690dc9e72e9fe5c1213c1267bd7faef0eca4590f32810801153214cfe.scope: Deactivated successfully.
Jan 23 10:35:22 compute-0 systemd[1]: libpod-fa55e8e690dc9e72e9fe5c1213c1267bd7faef0eca4590f32810801153214cfe.scope: Consumed 1.149s CPU time.
Jan 23 10:35:22 compute-0 podman[284716]: 2026-01-23 10:35:22.51281537 +0000 UTC m=+0.935186392 container died fa55e8e690dc9e72e9fe5c1213c1267bd7faef0eca4590f32810801153214cfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_buck, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:35:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:22 compute-0 nova_compute[249229]: 2026-01-23 10:35:22.687 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:35:22 compute-0 nova_compute[249229]: 2026-01-23 10:35:22.688 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:35:22 compute-0 nova_compute[249229]: 2026-01-23 10:35:22.711 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-469cec26e9c96ee89c05e21c2b514acdad2e258411fca8d86ed475cc16f65368-merged.mount: Deactivated successfully.
Jan 23 10:35:22 compute-0 podman[284716]: 2026-01-23 10:35:22.832678501 +0000 UTC m=+1.255049403 container remove fa55e8e690dc9e72e9fe5c1213c1267bd7faef0eca4590f32810801153214cfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 10:35:22 compute-0 sudo[284610]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:35:22 compute-0 systemd[1]: libpod-conmon-fa55e8e690dc9e72e9fe5c1213c1267bd7faef0eca4590f32810801153214cfe.scope: Deactivated successfully.
Jan 23 10:35:22 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:35:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:22.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:35:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:35:23 compute-0 ceph-mon[74335]: pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 819 B/s rd, 0 op/s
Jan 23 10:35:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3397794839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:35:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1836247010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:35:23 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:35:23 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3852357449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:35:23 compute-0 nova_compute[249229]: 2026-01-23 10:35:23.184 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:35:23 compute-0 nova_compute[249229]: 2026-01-23 10:35:23.191 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:35:23 compute-0 sudo[284869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:35:23 compute-0 sudo[284869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:23 compute-0 sudo[284869]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:23 compute-0 nova_compute[249229]: 2026-01-23 10:35:23.264 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:35:23 compute-0 nova_compute[249229]: 2026-01-23 10:35:23.266 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:35:23 compute-0 nova_compute[249229]: 2026-01-23 10:35:23.266 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:35:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Jan 23 10:35:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:23.766Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:35:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:23.766Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:35:23 compute-0 nova_compute[249229]: 2026-01-23 10:35:23.885 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:23.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:24 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/342726147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:35:24 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:35:24 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3852357449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:35:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:24.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:25 compute-0 nova_compute[249229]: 2026-01-23 10:35:25.260 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:35:25 compute-0 nova_compute[249229]: 2026-01-23 10:35:25.261 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:35:25 compute-0 ceph-mon[74335]: pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Jan 23 10:35:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Jan 23 10:35:25 compute-0 nova_compute[249229]: 2026-01-23 10:35:25.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:35:25 compute-0 nova_compute[249229]: 2026-01-23 10:35:25.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:35:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:25.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:26 compute-0 nova_compute[249229]: 2026-01-23 10:35:26.642 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:26 compute-0 ceph-mon[74335]: pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 546 B/s rd, 0 op/s
Jan 23 10:35:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:35:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:26.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:35:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:27.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:27.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:28 compute-0 nova_compute[249229]: 2026-01-23 10:35:28.599 249233 DEBUG oslo_concurrency.processutils [None req-a8510dbf-d677-4163-b681-0279df98cd8c 00aca23f964f49a5a9abfea9744e871b 5220cd4f58cb43bb899e367e961bc5c1 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:35:28 compute-0 nova_compute[249229]: 2026-01-23 10:35:28.629 249233 DEBUG oslo_concurrency.processutils [None req-a8510dbf-d677-4163-b681-0279df98cd8c 00aca23f964f49a5a9abfea9744e871b 5220cd4f58cb43bb899e367e961bc5c1 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:35:28 compute-0 nova_compute[249229]: 2026-01-23 10:35:28.885 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:28.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:28.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:29 compute-0 sudo[284903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:35:29 compute-0 sudo[284903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:29 compute-0 sudo[284903]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:29 compute-0 ceph-mon[74335]: pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:29.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:29] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:35:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:29] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:35:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:31 compute-0 ceph-mon[74335]: pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:31 compute-0 nova_compute[249229]: 2026-01-23 10:35:31.643 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:35:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:31.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:35:32 compute-0 ceph-mon[74335]: pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:32.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:33.767Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:33 compute-0 nova_compute[249229]: 2026-01-23 10:35:33.889 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:33.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:34 compute-0 ceph-mon[74335]: pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:35:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:35:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:35:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:35:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:35 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:35:35.629 161921 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:02:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:46:f9:a0:85:06'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 10:35:35 compute-0 nova_compute[249229]: 2026-01-23 10:35:35.630 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:35 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:35:35.631 161921 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 10:35:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:35:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:35.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:35:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:35:36 compute-0 nova_compute[249229]: 2026-01-23 10:35:36.684 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:36.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:37 compute-0 ceph-mon[74335]: pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:37.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:37.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:38 compute-0 ceph-mon[74335]: pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:38 compute-0 nova_compute[249229]: 2026-01-23 10:35:38.889 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:38.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:38.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:39 compute-0 podman[284938]: 2026-01-23 10:35:39.570416504 +0000 UTC m=+0.091503019 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 10:35:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:39] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 23 10:35:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:39] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 23 10:35:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:39.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:40 compute-0 ceph-mon[74335]: pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:35:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:40.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:35:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:41 compute-0 nova_compute[249229]: 2026-01-23 10:35:41.687 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:41.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:42 compute-0 ceph-mon[74335]: pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:42.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:43.768Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:43 compute-0 nova_compute[249229]: 2026-01-23 10:35:43.890 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:43.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:44 compute-0 ceph-mon[74335]: pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:44.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:45 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:35:45.632 161921 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=57e418b8-f514-4483-8675-f32d2dcd8cea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 10:35:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:35:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:45.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:35:46 compute-0 nova_compute[249229]: 2026-01-23 10:35:46.691 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:46.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:47 compute-0 ceph-mon[74335]: pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:47.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:47.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:35:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1762354554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:35:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:35:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1762354554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:35:48 compute-0 nova_compute[249229]: 2026-01-23 10:35:48.892 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:48.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:35:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:48.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:35:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:48.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:49 compute-0 sudo[284975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:35:49 compute-0 sudo[284975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:35:49 compute-0 sudo[284975]: pam_unix(sudo:session): session closed for user root
Jan 23 10:35:49 compute-0 podman[284999]: 2026-01-23 10:35:49.321590955 +0000 UTC m=+0.047019046 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 10:35:49 compute-0 ceph-mon[74335]: pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1762354554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:35:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1762354554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:35:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:49] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 23 10:35:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:49] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 23 10:35:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:49.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:35:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:35:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:35:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:35:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:35:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:35:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:35:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:35:50 compute-0 ceph-mon[74335]: pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:35:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:50.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:51 compute-0 nova_compute[249229]: 2026-01-23 10:35:51.694 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:52.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:53 compute-0 ceph-mon[74335]: pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:53.769Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:53 compute-0 nova_compute[249229]: 2026-01-23 10:35:53.895 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:53.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:54 compute-0 ceph-mon[74335]: pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:54 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 23 10:35:54 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:54.931419) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:35:54 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 23 10:35:54 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164554931526, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1677, "num_deletes": 251, "total_data_size": 3293654, "memory_usage": 3348096, "flush_reason": "Manual Compaction"}
Jan 23 10:35:54 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 23 10:35:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:35:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164555107277, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3204080, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35540, "largest_seqno": 37216, "table_properties": {"data_size": 3196224, "index_size": 4735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16131, "raw_average_key_size": 20, "raw_value_size": 3180644, "raw_average_value_size": 3995, "num_data_blocks": 201, "num_entries": 796, "num_filter_entries": 796, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164388, "oldest_key_time": 1769164388, "file_creation_time": 1769164554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 175969 microseconds, and 7555 cpu microseconds.
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.107339) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3204080 bytes OK
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.107415) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.111907) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.111959) EVENT_LOG_v1 {"time_micros": 1769164555111948, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.111990) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3286668, prev total WAL file size 3286668, number of live WAL files 2.
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.113070) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3128KB)], [77(11MB)]
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164555113227, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 15335447, "oldest_snapshot_seqno": -1}
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6837 keys, 13114666 bytes, temperature: kUnknown
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164555276303, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 13114666, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13071075, "index_size": 25367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 179640, "raw_average_key_size": 26, "raw_value_size": 12949794, "raw_average_value_size": 1894, "num_data_blocks": 991, "num_entries": 6837, "num_filter_entries": 6837, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769164555, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.277207) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13114666 bytes
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.284495) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.9 rd, 80.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 11.6 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(8.9) write-amplify(4.1) OK, records in: 7355, records dropped: 518 output_compression: NoCompression
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.284552) EVENT_LOG_v1 {"time_micros": 1769164555284530, "job": 44, "event": "compaction_finished", "compaction_time_micros": 163398, "compaction_time_cpu_micros": 29618, "output_level": 6, "num_output_files": 1, "total_output_size": 13114666, "num_input_records": 7355, "num_output_records": 6837, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164555285438, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164555287571, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.112866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.287663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.287668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.287669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.287671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:35:55 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:35:55.287672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:35:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:55.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:56 compute-0 nova_compute[249229]: 2026-01-23 10:35:56.698 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:56.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:57 compute-0 ceph-mon[74335]: pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:35:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:57.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:57.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:58 compute-0 ceph-mon[74335]: pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:35:58 compute-0 nova_compute[249229]: 2026-01-23 10:35:58.897 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:35:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:35:58.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:35:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:58.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:35:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:35:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:35:59.791 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:35:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:35:59.791 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:35:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:35:59.792 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:35:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:59] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:35:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:35:59] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:35:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:35:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:35:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:59.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:01.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:01 compute-0 ceph-mon[74335]: pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:01 compute-0 nova_compute[249229]: 2026-01-23 10:36:01.701 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:36:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:01.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:36:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:02 compute-0 ceph-mon[74335]: pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:03.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:03.770Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:03 compute-0 nova_compute[249229]: 2026-01-23 10:36:03.972 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:04.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:05.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:36:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:36:05 compute-0 ceph-mon[74335]: pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:36:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:36:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:06.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:36:06 compute-0 nova_compute[249229]: 2026-01-23 10:36:06.704 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:06 compute-0 ceph-mon[74335]: pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:36:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:07.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:36:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:07.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:08.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:08.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:08 compute-0 nova_compute[249229]: 2026-01-23 10:36:08.974 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:09.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:09 compute-0 ceph-mon[74335]: pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:09 compute-0 sudo[285039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:36:09 compute-0 sudo[285039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:09 compute-0 sudo[285039]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:09] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:36:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:09] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:36:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:10.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:10 compute-0 podman[285065]: 2026-01-23 10:36:10.556005495 +0000 UTC m=+0.085788897 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 10:36:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:11.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:11 compute-0 ceph-mon[74335]: pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:11 compute-0 nova_compute[249229]: 2026-01-23 10:36:11.706 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:36:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:12.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:36:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:12 compute-0 ceph-mon[74335]: pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:13.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:13 compute-0 nova_compute[249229]: 2026-01-23 10:36:13.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:13.770Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:36:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:13.770Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:36:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:13.770Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:36:13 compute-0 nova_compute[249229]: 2026-01-23 10:36:13.976 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:36:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:14.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:36:14 compute-0 ceph-mon[74335]: pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:15.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:16.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:16 compute-0 nova_compute[249229]: 2026-01-23 10:36:16.710 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:16 compute-0 nova_compute[249229]: 2026-01-23 10:36:16.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:16 compute-0 nova_compute[249229]: 2026-01-23 10:36:16.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:36:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:36:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:17.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:36:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:17 compute-0 ceph-mon[74335]: pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:17 compute-0 nova_compute[249229]: 2026-01-23 10:36:17.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:17 compute-0 nova_compute[249229]: 2026-01-23 10:36:17.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:36:17 compute-0 nova_compute[249229]: 2026-01-23 10:36:17.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:36:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:17.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:36:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:17.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:17 compute-0 nova_compute[249229]: 2026-01-23 10:36:17.893 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:36:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:36:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:18.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:36:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:18.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:36:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:18.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:36:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:18.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:36:18 compute-0 nova_compute[249229]: 2026-01-23 10:36:18.978 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:19.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:19 compute-0 ceph-mon[74335]: pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:19 compute-0 podman[285100]: 2026-01-23 10:36:19.524341545 +0000 UTC m=+0.054724985 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 10:36:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:19] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:36:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:19] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Jan 23 10:36:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:20.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:36:20
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['images', 'default.rgw.control', '.nfs', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'backups', 'default.rgw.meta']
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:36:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:36:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:36:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:36:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:36:20 compute-0 nova_compute[249229]: 2026-01-23 10:36:20.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:21.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:21 compute-0 ceph-mon[74335]: pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:21 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1725171694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:36:21 compute-0 nova_compute[249229]: 2026-01-23 10:36:21.713 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:21 compute-0 nova_compute[249229]: 2026-01-23 10:36:21.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:22.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:22 compute-0 ceph-mon[74335]: pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:22 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1801997280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:36:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:36:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:23.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:36:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:23 compute-0 sudo[285123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:36:23 compute-0 sudo[285123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:23 compute-0 sudo[285123]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:23 compute-0 sudo[285148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:36:23 compute-0 sudo[285148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:23 compute-0 nova_compute[249229]: 2026-01-23 10:36:23.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:23.772Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:23 compute-0 nova_compute[249229]: 2026-01-23 10:36:23.980 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:24 compute-0 sudo[285148]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:24 compute-0 nova_compute[249229]: 2026-01-23 10:36:24.028 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:36:24 compute-0 nova_compute[249229]: 2026-01-23 10:36:24.029 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:36:24 compute-0 nova_compute[249229]: 2026-01-23 10:36:24.029 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:36:24 compute-0 nova_compute[249229]: 2026-01-23 10:36:24.029 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:36:24 compute-0 nova_compute[249229]: 2026-01-23 10:36:24.030 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:36:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:24.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:24 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4289693492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:36:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:36:24 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1080446153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:36:24 compute-0 nova_compute[249229]: 2026-01-23 10:36:24.894 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.864s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:36:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:25.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:25 compute-0 nova_compute[249229]: 2026-01-23 10:36:25.036 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:36:25 compute-0 nova_compute[249229]: 2026-01-23 10:36:25.037 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4518MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:36:25 compute-0 nova_compute[249229]: 2026-01-23 10:36:25.037 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:36:25 compute-0 nova_compute[249229]: 2026-01-23 10:36:25.037 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:36:25 compute-0 ceph-mon[74335]: pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:25 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1368383346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:36:25 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1080446153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:36:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:25 compute-0 nova_compute[249229]: 2026-01-23 10:36:25.950 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:36:25 compute-0 nova_compute[249229]: 2026-01-23 10:36:25.951 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.051 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:36:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:36:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:26.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:36:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:36:26 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659114460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.534 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.540 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:36:26 compute-0 ceph-mon[74335]: pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.570 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.572 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.572 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.573 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.573 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.592 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.592 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.593 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 10:36:26 compute-0 nova_compute[249229]: 2026-01-23 10:36:26.717 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 10:36:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 10:36:26 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:36:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:27.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:36:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:36:27 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:36:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:36:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:36:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:36:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1659114460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:36:27 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:27 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:27 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:27.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:36:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:36:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:28.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:36:28 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:36:28 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:36:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:36:28 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:36:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:36:28 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:36:28 compute-0 sudo[285251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:36:28 compute-0 sudo[285251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:28 compute-0 sudo[285251]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:28 compute-0 sudo[285276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:36:28 compute-0 sudo[285276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:28 compute-0 nova_compute[249229]: 2026-01-23 10:36:28.621 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:28 compute-0 nova_compute[249229]: 2026-01-23 10:36:28.622 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:28 compute-0 nova_compute[249229]: 2026-01-23 10:36:28.622 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:28 compute-0 podman[285343]: 2026-01-23 10:36:28.681153643 +0000 UTC m=+0.041849480 container create b159823b73fc8204e4d8ca074bc11e4f57fc67dd2f4eec9cbfcbd7eaeae805ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 10:36:28 compute-0 ceph-mon[74335]: pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:36:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:36:28 compute-0 ceph-mon[74335]: pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:36:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:36:28 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:36:28 compute-0 systemd[1]: Started libpod-conmon-b159823b73fc8204e4d8ca074bc11e4f57fc67dd2f4eec9cbfcbd7eaeae805ca.scope.
Jan 23 10:36:28 compute-0 podman[285343]: 2026-01-23 10:36:28.662084121 +0000 UTC m=+0.022779978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:36:28 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:36:28 compute-0 podman[285343]: 2026-01-23 10:36:28.792829223 +0000 UTC m=+0.153525090 container init b159823b73fc8204e4d8ca074bc11e4f57fc67dd2f4eec9cbfcbd7eaeae805ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:36:28 compute-0 podman[285343]: 2026-01-23 10:36:28.800280685 +0000 UTC m=+0.160976522 container start b159823b73fc8204e4d8ca074bc11e4f57fc67dd2f4eec9cbfcbd7eaeae805ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 10:36:28 compute-0 podman[285343]: 2026-01-23 10:36:28.803310401 +0000 UTC m=+0.164006258 container attach b159823b73fc8204e4d8ca074bc11e4f57fc67dd2f4eec9cbfcbd7eaeae805ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:36:28 compute-0 fervent_bartik[285359]: 167 167
Jan 23 10:36:28 compute-0 systemd[1]: libpod-b159823b73fc8204e4d8ca074bc11e4f57fc67dd2f4eec9cbfcbd7eaeae805ca.scope: Deactivated successfully.
Jan 23 10:36:28 compute-0 conmon[285359]: conmon b159823b73fc8204e4d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b159823b73fc8204e4d8ca074bc11e4f57fc67dd2f4eec9cbfcbd7eaeae805ca.scope/container/memory.events
Jan 23 10:36:28 compute-0 podman[285343]: 2026-01-23 10:36:28.808389955 +0000 UTC m=+0.169085812 container died b159823b73fc8204e4d8ca074bc11e4f57fc67dd2f4eec9cbfcbd7eaeae805ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 10:36:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-df97fd7d97cba4b26f8a4fafc35659cf83163fc5fcdd6573acf016f7ac744770-merged.mount: Deactivated successfully.
Jan 23 10:36:28 compute-0 podman[285343]: 2026-01-23 10:36:28.848841163 +0000 UTC m=+0.209537000 container remove b159823b73fc8204e4d8ca074bc11e4f57fc67dd2f4eec9cbfcbd7eaeae805ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:36:28 compute-0 systemd[1]: libpod-conmon-b159823b73fc8204e4d8ca074bc11e4f57fc67dd2f4eec9cbfcbd7eaeae805ca.scope: Deactivated successfully.
Jan 23 10:36:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:28.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:28 compute-0 nova_compute[249229]: 2026-01-23 10:36:28.981 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:29 compute-0 podman[285383]: 2026-01-23 10:36:29.007885349 +0000 UTC m=+0.042703934 container create d67b564a448c2e99f0b1a20c7095d72044206d81781363f762e9ddf5993d47d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_payne, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:36:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:29.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:29 compute-0 systemd[1]: Started libpod-conmon-d67b564a448c2e99f0b1a20c7095d72044206d81781363f762e9ddf5993d47d9.scope.
Jan 23 10:36:29 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b69926e6e18e73837e53be0e06f3a1dd0785c8d663ff6c6ea65ee64c531db8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b69926e6e18e73837e53be0e06f3a1dd0785c8d663ff6c6ea65ee64c531db8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b69926e6e18e73837e53be0e06f3a1dd0785c8d663ff6c6ea65ee64c531db8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b69926e6e18e73837e53be0e06f3a1dd0785c8d663ff6c6ea65ee64c531db8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1b69926e6e18e73837e53be0e06f3a1dd0785c8d663ff6c6ea65ee64c531db8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:29 compute-0 podman[285383]: 2026-01-23 10:36:28.989504267 +0000 UTC m=+0.024322872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:36:29 compute-0 podman[285383]: 2026-01-23 10:36:29.102083303 +0000 UTC m=+0.136901998 container init d67b564a448c2e99f0b1a20c7095d72044206d81781363f762e9ddf5993d47d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_payne, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:36:29 compute-0 podman[285383]: 2026-01-23 10:36:29.108906187 +0000 UTC m=+0.143724812 container start d67b564a448c2e99f0b1a20c7095d72044206d81781363f762e9ddf5993d47d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 10:36:29 compute-0 podman[285383]: 2026-01-23 10:36:29.1160445 +0000 UTC m=+0.150863255 container attach d67b564a448c2e99f0b1a20c7095d72044206d81781363f762e9ddf5993d47d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_payne, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:36:29 compute-0 sudo[285411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:36:29 compute-0 sudo[285411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:29 compute-0 sudo[285411]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:29 compute-0 competent_payne[285400]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:36:29 compute-0 competent_payne[285400]: --> All data devices are unavailable
Jan 23 10:36:29 compute-0 systemd[1]: libpod-d67b564a448c2e99f0b1a20c7095d72044206d81781363f762e9ddf5993d47d9.scope: Deactivated successfully.
Jan 23 10:36:29 compute-0 podman[285383]: 2026-01-23 10:36:29.480459636 +0000 UTC m=+0.515278221 container died d67b564a448c2e99f0b1a20c7095d72044206d81781363f762e9ddf5993d47d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_payne, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 23 10:36:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1b69926e6e18e73837e53be0e06f3a1dd0785c8d663ff6c6ea65ee64c531db8-merged.mount: Deactivated successfully.
Jan 23 10:36:29 compute-0 podman[285383]: 2026-01-23 10:36:29.520327117 +0000 UTC m=+0.555145702 container remove d67b564a448c2e99f0b1a20c7095d72044206d81781363f762e9ddf5993d47d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 10:36:29 compute-0 systemd[1]: libpod-conmon-d67b564a448c2e99f0b1a20c7095d72044206d81781363f762e9ddf5993d47d9.scope: Deactivated successfully.
Jan 23 10:36:29 compute-0 sudo[285276]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:29 compute-0 sudo[285456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:36:29 compute-0 sudo[285456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:29 compute-0 sudo[285456]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:29 compute-0 sudo[285481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:36:29 compute-0 sudo[285481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:29] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:36:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:29] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:36:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:36:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:30.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:36:30 compute-0 podman[285546]: 2026-01-23 10:36:30.084394522 +0000 UTC m=+0.042770485 container create 0ec08d1a599272ade4343ed49d5a6d40a0ff6121a47ff62422cef376bb818c92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:36:30 compute-0 systemd[1]: Started libpod-conmon-0ec08d1a599272ade4343ed49d5a6d40a0ff6121a47ff62422cef376bb818c92.scope.
Jan 23 10:36:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:36:30 compute-0 podman[285546]: 2026-01-23 10:36:30.067808891 +0000 UTC m=+0.026184874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:36:30 compute-0 podman[285546]: 2026-01-23 10:36:30.287593381 +0000 UTC m=+0.245969364 container init 0ec08d1a599272ade4343ed49d5a6d40a0ff6121a47ff62422cef376bb818c92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamarr, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 23 10:36:30 compute-0 podman[285546]: 2026-01-23 10:36:30.295434624 +0000 UTC m=+0.253810587 container start 0ec08d1a599272ade4343ed49d5a6d40a0ff6121a47ff62422cef376bb818c92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamarr, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:36:30 compute-0 podman[285546]: 2026-01-23 10:36:30.298519271 +0000 UTC m=+0.256895254 container attach 0ec08d1a599272ade4343ed49d5a6d40a0ff6121a47ff62422cef376bb818c92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 23 10:36:30 compute-0 busy_lamarr[285563]: 167 167
Jan 23 10:36:30 compute-0 systemd[1]: libpod-0ec08d1a599272ade4343ed49d5a6d40a0ff6121a47ff62422cef376bb818c92.scope: Deactivated successfully.
Jan 23 10:36:30 compute-0 conmon[285563]: conmon 0ec08d1a599272ade434 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ec08d1a599272ade4343ed49d5a6d40a0ff6121a47ff62422cef376bb818c92.scope/container/memory.events
Jan 23 10:36:30 compute-0 podman[285546]: 2026-01-23 10:36:30.301060403 +0000 UTC m=+0.259436366 container died 0ec08d1a599272ade4343ed49d5a6d40a0ff6121a47ff62422cef376bb818c92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f04d17ea906b3e6e9271b4a2992293be4a8158cbf6d73c6fad7f23e7ef5229d5-merged.mount: Deactivated successfully.
Jan 23 10:36:30 compute-0 podman[285546]: 2026-01-23 10:36:30.342217132 +0000 UTC m=+0.300593095 container remove 0ec08d1a599272ade4343ed49d5a6d40a0ff6121a47ff62422cef376bb818c92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:36:30 compute-0 systemd[1]: libpod-conmon-0ec08d1a599272ade4343ed49d5a6d40a0ff6121a47ff62422cef376bb818c92.scope: Deactivated successfully.
Jan 23 10:36:30 compute-0 podman[285588]: 2026-01-23 10:36:30.556153036 +0000 UTC m=+0.097734426 container create 0caa3805871786c5ccb3e13631ebfd0a0675b7f9e2c56a7c53bbd6c3e2926928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:36:30 compute-0 podman[285588]: 2026-01-23 10:36:30.480036895 +0000 UTC m=+0.021618315 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:36:30 compute-0 systemd[1]: Started libpod-conmon-0caa3805871786c5ccb3e13631ebfd0a0675b7f9e2c56a7c53bbd6c3e2926928.scope.
Jan 23 10:36:30 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92634fca940d0591d1028c172941256a45e98a8a6d676df4473bec8c14558ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92634fca940d0591d1028c172941256a45e98a8a6d676df4473bec8c14558ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92634fca940d0591d1028c172941256a45e98a8a6d676df4473bec8c14558ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92634fca940d0591d1028c172941256a45e98a8a6d676df4473bec8c14558ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:30 compute-0 podman[285588]: 2026-01-23 10:36:30.865290662 +0000 UTC m=+0.406872102 container init 0caa3805871786c5ccb3e13631ebfd0a0675b7f9e2c56a7c53bbd6c3e2926928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:36:30 compute-0 podman[285588]: 2026-01-23 10:36:30.872095836 +0000 UTC m=+0.413677226 container start 0caa3805871786c5ccb3e13631ebfd0a0675b7f9e2c56a7c53bbd6c3e2926928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:36:30 compute-0 podman[285588]: 2026-01-23 10:36:30.876099269 +0000 UTC m=+0.417680679 container attach 0caa3805871786c5ccb3e13631ebfd0a0675b7f9e2c56a7c53bbd6c3e2926928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:36:30 compute-0 ceph-mon[74335]: pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:31.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]: {
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:     "1": [
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:         {
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             "devices": [
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "/dev/loop3"
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             ],
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             "lv_name": "ceph_lv0",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             "lv_size": "21470642176",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             "name": "ceph_lv0",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             "tags": {
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.cluster_name": "ceph",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.crush_device_class": "",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.encrypted": "0",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.osd_id": "1",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.type": "block",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.vdo": "0",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:                 "ceph.with_tpm": "0"
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             },
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             "type": "block",
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:             "vg_name": "ceph_vg0"
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:         }
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]:     ]
Jan 23 10:36:31 compute-0 gallant_mirzakhani[285606]: }
Jan 23 10:36:31 compute-0 systemd[1]: libpod-0caa3805871786c5ccb3e13631ebfd0a0675b7f9e2c56a7c53bbd6c3e2926928.scope: Deactivated successfully.
Jan 23 10:36:31 compute-0 podman[285588]: 2026-01-23 10:36:31.182891248 +0000 UTC m=+0.724472648 container died 0caa3805871786c5ccb3e13631ebfd0a0675b7f9e2c56a7c53bbd6c3e2926928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f92634fca940d0591d1028c172941256a45e98a8a6d676df4473bec8c14558ac-merged.mount: Deactivated successfully.
Jan 23 10:36:31 compute-0 podman[285588]: 2026-01-23 10:36:31.222998007 +0000 UTC m=+0.764579397 container remove 0caa3805871786c5ccb3e13631ebfd0a0675b7f9e2c56a7c53bbd6c3e2926928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 10:36:31 compute-0 systemd[1]: libpod-conmon-0caa3805871786c5ccb3e13631ebfd0a0675b7f9e2c56a7c53bbd6c3e2926928.scope: Deactivated successfully.
Jan 23 10:36:31 compute-0 sudo[285481]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:31 compute-0 sudo[285625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:36:31 compute-0 sudo[285625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:31 compute-0 sudo[285625]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:31 compute-0 sudo[285650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:36:31 compute-0 sudo[285650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:31 compute-0 nova_compute[249229]: 2026-01-23 10:36:31.721 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:31 compute-0 podman[285713]: 2026-01-23 10:36:31.7676331 +0000 UTC m=+0.041318934 container create 3c345f7122cf26594d69b0a459379ce5843755b5f9eab1411472f9f8b81bde8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cohen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 10:36:31 compute-0 systemd[1]: Started libpod-conmon-3c345f7122cf26594d69b0a459379ce5843755b5f9eab1411472f9f8b81bde8c.scope.
Jan 23 10:36:31 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:36:31 compute-0 podman[285713]: 2026-01-23 10:36:31.840129258 +0000 UTC m=+0.113815122 container init 3c345f7122cf26594d69b0a459379ce5843755b5f9eab1411472f9f8b81bde8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:36:31 compute-0 podman[285713]: 2026-01-23 10:36:31.74895789 +0000 UTC m=+0.022643754 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:36:31 compute-0 podman[285713]: 2026-01-23 10:36:31.846325804 +0000 UTC m=+0.120011648 container start 3c345f7122cf26594d69b0a459379ce5843755b5f9eab1411472f9f8b81bde8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cohen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:36:31 compute-0 podman[285713]: 2026-01-23 10:36:31.850016499 +0000 UTC m=+0.123702343 container attach 3c345f7122cf26594d69b0a459379ce5843755b5f9eab1411472f9f8b81bde8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cohen, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 10:36:31 compute-0 elegant_cohen[285729]: 167 167
Jan 23 10:36:31 compute-0 systemd[1]: libpod-3c345f7122cf26594d69b0a459379ce5843755b5f9eab1411472f9f8b81bde8c.scope: Deactivated successfully.
Jan 23 10:36:31 compute-0 podman[285713]: 2026-01-23 10:36:31.851051808 +0000 UTC m=+0.124737662 container died 3c345f7122cf26594d69b0a459379ce5843755b5f9eab1411472f9f8b81bde8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0ff3ed8c9eda92cdf165fe42f311f0aad638f73e9bbbf104ea80cdd71b9b601-merged.mount: Deactivated successfully.
Jan 23 10:36:31 compute-0 podman[285713]: 2026-01-23 10:36:31.881221735 +0000 UTC m=+0.154907579 container remove 3c345f7122cf26594d69b0a459379ce5843755b5f9eab1411472f9f8b81bde8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cohen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 10:36:31 compute-0 systemd[1]: libpod-conmon-3c345f7122cf26594d69b0a459379ce5843755b5f9eab1411472f9f8b81bde8c.scope: Deactivated successfully.
Jan 23 10:36:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:32.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:32 compute-0 podman[285754]: 2026-01-23 10:36:32.015810626 +0000 UTC m=+0.023099747 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:36:32 compute-0 podman[285754]: 2026-01-23 10:36:32.547244224 +0000 UTC m=+0.554533355 container create 16bbcc237d6efe2572e35807e27d1e54ec2c88ab7e3200bbd185c4c13826d571 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sinoussi, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 10:36:32 compute-0 systemd[1]: Started libpod-conmon-16bbcc237d6efe2572e35807e27d1e54ec2c88ab7e3200bbd185c4c13826d571.scope.
Jan 23 10:36:32 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeb09f0dfc959ca5db1081b8f493c2ecdbc0752882646c5f8b5230f6b1ed70e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeb09f0dfc959ca5db1081b8f493c2ecdbc0752882646c5f8b5230f6b1ed70e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeb09f0dfc959ca5db1081b8f493c2ecdbc0752882646c5f8b5230f6b1ed70e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeb09f0dfc959ca5db1081b8f493c2ecdbc0752882646c5f8b5230f6b1ed70e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:36:32 compute-0 podman[285754]: 2026-01-23 10:36:32.722434068 +0000 UTC m=+0.729723189 container init 16bbcc237d6efe2572e35807e27d1e54ec2c88ab7e3200bbd185c4c13826d571 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sinoussi, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:36:32 compute-0 podman[285754]: 2026-01-23 10:36:32.728929852 +0000 UTC m=+0.736218953 container start 16bbcc237d6efe2572e35807e27d1e54ec2c88ab7e3200bbd185c4c13826d571 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 10:36:32 compute-0 podman[285754]: 2026-01-23 10:36:32.732236556 +0000 UTC m=+0.739525687 container attach 16bbcc237d6efe2572e35807e27d1e54ec2c88ab7e3200bbd185c4c13826d571 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:36:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:33.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:33 compute-0 lvm[285846]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:36:33 compute-0 lvm[285846]: VG ceph_vg0 finished
Jan 23 10:36:33 compute-0 mystifying_sinoussi[285772]: {}
Jan 23 10:36:33 compute-0 systemd[1]: libpod-16bbcc237d6efe2572e35807e27d1e54ec2c88ab7e3200bbd185c4c13826d571.scope: Deactivated successfully.
Jan 23 10:36:33 compute-0 systemd[1]: libpod-16bbcc237d6efe2572e35807e27d1e54ec2c88ab7e3200bbd185c4c13826d571.scope: Consumed 1.054s CPU time.
Jan 23 10:36:33 compute-0 podman[285850]: 2026-01-23 10:36:33.426647131 +0000 UTC m=+0.024843646 container died 16bbcc237d6efe2572e35807e27d1e54ec2c88ab7e3200bbd185c4c13826d571 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sinoussi, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 10:36:33 compute-0 ceph-mon[74335]: pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:33.772Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:36:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:33.774Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aeb09f0dfc959ca5db1081b8f493c2ecdbc0752882646c5f8b5230f6b1ed70e-merged.mount: Deactivated successfully.
Jan 23 10:36:33 compute-0 podman[285850]: 2026-01-23 10:36:33.94193127 +0000 UTC m=+0.540127745 container remove 16bbcc237d6efe2572e35807e27d1e54ec2c88ab7e3200bbd185c4c13826d571 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sinoussi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:36:33 compute-0 systemd[1]: libpod-conmon-16bbcc237d6efe2572e35807e27d1e54ec2c88ab7e3200bbd185c4c13826d571.scope: Deactivated successfully.
Jan 23 10:36:33 compute-0 nova_compute[249229]: 2026-01-23 10:36:33.984 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:33 compute-0 sudo[285650]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:36:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:36:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:34.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:36:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:36:34 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:34 compute-0 sudo[285865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:36:34 compute-0 sudo[285865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:34 compute-0 sudo[285865]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:34 compute-0 ceph-mon[74335]: pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:34 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:35.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:36:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:36:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:35 compute-0 nova_compute[249229]: 2026-01-23 10:36:35.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:36:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:36:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:36:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:36:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:36.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:36:36 compute-0 nova_compute[249229]: 2026-01-23 10:36:36.724 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:36 compute-0 ceph-mon[74335]: pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:37.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:37.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:36:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:38.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:36:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:38.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:38 compute-0 nova_compute[249229]: 2026-01-23 10:36:38.987 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:39.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:39 compute-0 ceph-mon[74335]: pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 601 B/s rd, 0 op/s
Jan 23 10:36:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:39] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:36:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:39] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:36:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:40.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:41 compute-0 ceph-mon[74335]: pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:41.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:41 compute-0 podman[285896]: 2026-01-23 10:36:41.594213074 +0000 UTC m=+0.117785175 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 10:36:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:41 compute-0 nova_compute[249229]: 2026-01-23 10:36:41.725 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:36:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:42.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:36:42 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:36:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:43.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:36:43 compute-0 ceph-mon[74335]: pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:43.775Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:43 compute-0 nova_compute[249229]: 2026-01-23 10:36:43.987 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:36:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:44.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:36:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:45.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:45 compute-0 ceph-mon[74335]: pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:46.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:46 compute-0 nova_compute[249229]: 2026-01-23 10:36:46.744 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:47.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:47 compute-0 ceph-mon[74335]: pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:47.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:48.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:48 compute-0 ceph-mon[74335]: pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:36:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195883716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:36:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:36:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195883716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:36:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:48.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:48 compute-0 nova_compute[249229]: 2026-01-23 10:36:48.988 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:49.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2195883716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:36:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/2195883716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:36:49 compute-0 sudo[285931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:36:49 compute-0 sudo[285931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:36:49 compute-0 sudo[285931]: pam_unix(sudo:session): session closed for user root
Jan 23 10:36:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:49] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:36:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:49] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:36:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:36:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:36:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:50.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:36:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:36:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:36:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:36:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:36:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:36:50 compute-0 ceph-mon[74335]: pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:36:50 compute-0 podman[285957]: 2026-01-23 10:36:50.516113832 +0000 UTC m=+0.050210807 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 23 10:36:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:51.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:51 compute-0 nova_compute[249229]: 2026-01-23 10:36:51.749 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:52.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:52 compute-0 ceph-mon[74335]: pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:53.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:53.776Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:53 compute-0 nova_compute[249229]: 2026-01-23 10:36:53.991 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:54.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:54 compute-0 ceph-mon[74335]: pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:55.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:56.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:56 compute-0 nova_compute[249229]: 2026-01-23 10:36:56.752 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:57.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:57 compute-0 ceph-mon[74335]: pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:36:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:57.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:36:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:58.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:58 compute-0 ceph-mon[74335]: pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:36:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:58.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:36:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:36:58.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:36:58 compute-0 nova_compute[249229]: 2026-01-23 10:36:58.992 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:36:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:36:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:36:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:59.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:36:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:36:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:36:59.793 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:36:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:36:59.795 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:36:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:36:59.795 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:36:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:59] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:36:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:36:59] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:37:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:00.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:00 compute-0 ceph-mon[74335]: pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:01.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:01 compute-0 nova_compute[249229]: 2026-01-23 10:37:01.755 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:37:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:02.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:37:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:37:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:03.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:37:03 compute-0 ceph-mon[74335]: pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:03.777Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:03 compute-0 nova_compute[249229]: 2026-01-23 10:37:03.994 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:04.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:05.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:37:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:37:05 compute-0 ceph-mon[74335]: pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:37:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:06.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:06 compute-0 ceph-mon[74335]: pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:06 compute-0 nova_compute[249229]: 2026-01-23 10:37:06.759 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:07.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:07.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:37:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:07.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:37:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:07.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:08.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:08.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:08 compute-0 nova_compute[249229]: 2026-01-23 10:37:08.997 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:09.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:09 compute-0 ceph-mon[74335]: pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:09 compute-0 sudo[285997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:37:09 compute-0 sudo[285997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:09 compute-0 sudo[285997]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:09] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:37:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:09] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:37:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:10.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:10 compute-0 ceph-mon[74335]: pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:11.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:11 compute-0 nova_compute[249229]: 2026-01-23 10:37:11.763 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:12.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:12 compute-0 podman[286025]: 2026-01-23 10:37:12.548472336 +0000 UTC m=+0.082204486 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Jan 23 10:37:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:13.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:13 compute-0 ceph-mon[74335]: pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:13 compute-0 nova_compute[249229]: 2026-01-23 10:37:13.734 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:37:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:13.777Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:14 compute-0 nova_compute[249229]: 2026-01-23 10:37:13.999 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:14.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:14 compute-0 ceph-mon[74335]: pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:15.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:16.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:16 compute-0 nova_compute[249229]: 2026-01-23 10:37:16.766 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:16 compute-0 ceph-mon[74335]: pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:17.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:17 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:17.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:18.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:18 compute-0 nova_compute[249229]: 2026-01-23 10:37:18.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:37:18 compute-0 nova_compute[249229]: 2026-01-23 10:37:18.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:37:18 compute-0 nova_compute[249229]: 2026-01-23 10:37:18.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:37:18 compute-0 nova_compute[249229]: 2026-01-23 10:37:18.736 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:37:18 compute-0 nova_compute[249229]: 2026-01-23 10:37:18.737 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:37:18 compute-0 nova_compute[249229]: 2026-01-23 10:37:18.737 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:37:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:18.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:19 compute-0 nova_compute[249229]: 2026-01-23 10:37:18.999 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:19.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:19 compute-0 ceph-mon[74335]: pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:19] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:37:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:19] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:37:20
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['volumes', '.nfs', 'images', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control']
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:37:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:37:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:37:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:20.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:37:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:37:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:37:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:21.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:21 compute-0 ceph-mon[74335]: pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:21 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2129370149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:37:21 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3810051989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:37:21 compute-0 podman[286062]: 2026-01-23 10:37:21.517486223 +0000 UTC m=+0.048714057 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 10:37:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:21 compute-0 nova_compute[249229]: 2026-01-23 10:37:21.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:37:21 compute-0 nova_compute[249229]: 2026-01-23 10:37:21.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:37:21 compute-0 nova_compute[249229]: 2026-01-23 10:37:21.770 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:22.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:22 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:22 compute-0 ceph-mon[74335]: pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:23.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:23 compute-0 nova_compute[249229]: 2026-01-23 10:37:23.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:37:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:23.778Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:24 compute-0 nova_compute[249229]: 2026-01-23 10:37:24.000 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:24 compute-0 nova_compute[249229]: 2026-01-23 10:37:24.104 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:37:24 compute-0 nova_compute[249229]: 2026-01-23 10:37:24.105 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:37:24 compute-0 nova_compute[249229]: 2026-01-23 10:37:24.105 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:37:24 compute-0 nova_compute[249229]: 2026-01-23 10:37:24.106 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:37:24 compute-0 nova_compute[249229]: 2026-01-23 10:37:24.106 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:37:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:24.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:24 compute-0 ceph-mon[74335]: pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:37:24 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3870511584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:37:24 compute-0 nova_compute[249229]: 2026-01-23 10:37:24.764 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:37:24 compute-0 nova_compute[249229]: 2026-01-23 10:37:24.920 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:37:24 compute-0 nova_compute[249229]: 2026-01-23 10:37:24.921 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4547MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:37:24 compute-0 nova_compute[249229]: 2026-01-23 10:37:24.922 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:37:24 compute-0 nova_compute[249229]: 2026-01-23 10:37:24.922 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:37:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:25.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:25 compute-0 nova_compute[249229]: 2026-01-23 10:37:25.405 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:37:25 compute-0 nova_compute[249229]: 2026-01-23 10:37:25.406 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:37:25 compute-0 nova_compute[249229]: 2026-01-23 10:37:25.471 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing inventories for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 23 10:37:25 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3870511584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:37:25 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/583159563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:37:25 compute-0 nova_compute[249229]: 2026-01-23 10:37:25.498 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating ProviderTree inventory for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 23 10:37:25 compute-0 nova_compute[249229]: 2026-01-23 10:37:25.499 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Updating inventory in ProviderTree for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 23 10:37:25 compute-0 nova_compute[249229]: 2026-01-23 10:37:25.520 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing aggregate associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 23 10:37:25 compute-0 nova_compute[249229]: 2026-01-23 10:37:25.545 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Refreshing trait associations for resource provider a1f82a16-d7e7-4500-99d7-a20de995d9a2, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_FMA3,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 23 10:37:25 compute-0 nova_compute[249229]: 2026-01-23 10:37:25.596 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:37:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:37:26 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220990230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:37:26 compute-0 nova_compute[249229]: 2026-01-23 10:37:26.081 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:37:26 compute-0 nova_compute[249229]: 2026-01-23 10:37:26.088 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:37:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:26.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:26 compute-0 nova_compute[249229]: 2026-01-23 10:37:26.131 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:37:26 compute-0 nova_compute[249229]: 2026-01-23 10:37:26.133 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:37:26 compute-0 nova_compute[249229]: 2026-01-23 10:37:26.134 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:37:26 compute-0 ceph-mon[74335]: pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/574915668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:37:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3220990230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:37:26 compute-0 nova_compute[249229]: 2026-01-23 10:37:26.773 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:27.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:27.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:27 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:28.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:28.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:29 compute-0 nova_compute[249229]: 2026-01-23 10:37:29.002 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:29.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:29 compute-0 nova_compute[249229]: 2026-01-23 10:37:29.126 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:37:29 compute-0 nova_compute[249229]: 2026-01-23 10:37:29.127 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:37:29 compute-0 ceph-mon[74335]: pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:29 compute-0 sudo[286133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:37:29 compute-0 sudo[286133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:29 compute-0 sudo[286133]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:29] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:37:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:29] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:37:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:30.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:30 compute-0 nova_compute[249229]: 2026-01-23 10:37:30.203 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:37:30 compute-0 nova_compute[249229]: 2026-01-23 10:37:30.204 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:37:30 compute-0 ceph-mon[74335]: pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:31.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:31 compute-0 nova_compute[249229]: 2026-01-23 10:37:31.779 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:32.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:32 compute-0 ceph-mon[74335]: pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:32 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:37:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:33.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:37:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:33.779Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:34 compute-0 nova_compute[249229]: 2026-01-23 10:37:34.002 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:34.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:35 compute-0 sudo[286164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:37:35 compute-0 sudo[286164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:35 compute-0 sudo[286164]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:37:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:37:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:35.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:35 compute-0 ceph-mon[74335]: pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:37:35 compute-0 sudo[286189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 23 10:37:35 compute-0 sudo[286189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:35 compute-0 podman[286286]: 2026-01-23 10:37:35.72819465 +0000 UTC m=+0.058519218 container exec cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:37:35 compute-0 podman[286286]: 2026-01-23 10:37:35.829725039 +0000 UTC m=+0.160049577 container exec_died cbfd7f9a2ad9887ed3adf829b401ebf60f670e8aa91916ede409f75a12aeb3e6 (image=quay.io/ceph/ceph:v19, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:37:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:37:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:36.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:37:36 compute-0 podman[286406]: 2026-01-23 10:37:36.272050501 +0000 UTC m=+0.049803308 container exec 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:37:36 compute-0 podman[286406]: 2026-01-23 10:37:36.284736004 +0000 UTC m=+0.062488791 container exec_died 97848d12ab6322cfe3cc805f7972048af088977ee2e693bcc0a5bb581613a0d8 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:37:36 compute-0 podman[286545]: 2026-01-23 10:37:36.771724246 +0000 UTC m=+0.054613166 container exec 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 10:37:36 compute-0 nova_compute[249229]: 2026-01-23 10:37:36.781 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:36 compute-0 podman[286545]: 2026-01-23 10:37:36.783753031 +0000 UTC m=+0.066641931 container exec_died 2675dd2af0d87249968094bd6c1eb5d25ac7173a5fd992ed1bd309216a505178 (image=quay.io/ceph/haproxy:2.3, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-haproxy-nfs-cephfs-compute-0-yeogal)
Jan 23 10:37:36 compute-0 podman[286609]: 2026-01-23 10:37:36.98477229 +0000 UTC m=+0.055204263 container exec 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, vendor=Red Hat, Inc., version=2.2.4, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, description=keepalived for Ceph, com.redhat.component=keepalived-container, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 23 10:37:36 compute-0 podman[286609]: 2026-01-23 10:37:36.999691967 +0000 UTC m=+0.070123910 container exec_died 4783fa4a0e03b39b894c380a1696ad8cf3e72c4e94fd2480817944beb7891609 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-keepalived-nfs-cephfs-compute-0-lrsdkc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 23 10:37:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:37.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:37 compute-0 podman[286673]: 2026-01-23 10:37:37.190141803 +0000 UTC m=+0.048700236 container exec a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:37:37 compute-0 podman[286673]: 2026-01-23 10:37:37.224553709 +0000 UTC m=+0.083112102 container exec_died a2ddeb968d99b1961970b788dda423e2c0177d966be5d6216090bc7f97658982 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:37:37 compute-0 ceph-mon[74335]: pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:37.897Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:37:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:37.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:37:37 compute-0 podman[286748]: 2026-01-23 10:37:37.943155626 +0000 UTC m=+0.568500218 container exec 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 10:37:37 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:38.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:38 compute-0 podman[286748]: 2026-01-23 10:37:38.336848895 +0000 UTC m=+0.962193467 container exec_died 91a745e69178e4c0b1322185fa504c92fadf052d1491b19cbcad743ff263de28 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 23 10:37:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:38.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:37:39 compute-0 nova_compute[249229]: 2026-01-23 10:37:39.004 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:39.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:39] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:37:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:39] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:37:39 compute-0 ceph-mon[74335]: pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:37:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:40.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:37:40 compute-0 podman[286867]: 2026-01-23 10:37:40.267469924 +0000 UTC m=+0.315668014 container exec 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:37:40 compute-0 podman[286867]: 2026-01-23 10:37:40.317405845 +0000 UTC m=+0.365603915 container exec_died 8d18c97a753c8610913f4f1be41e91ee0fa0045d3fad5bb17483ebd74168eedf (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f3005f84-239a-55b6-a948-8f1fb592b920-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 23 10:37:40 compute-0 sudo[286189]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:37:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:40 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:37:40 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:40 compute-0 sudo[286913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:37:40 compute-0 sudo[286913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:40 compute-0 sudo[286913]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:40 compute-0 sudo[286938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:37:40 compute-0 sudo[286938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:41 compute-0 ceph-mon[74335]: pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:41 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:41.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:41 compute-0 sudo[286938]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:37:41 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:37:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:37:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:37:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 775 B/s rd, 0 op/s
Jan 23 10:37:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:37:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:37:41 compute-0 nova_compute[249229]: 2026-01-23 10:37:41.784 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:37:41 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:37:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:37:41 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:37:41 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:37:41 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:37:41 compute-0 sudo[286994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:37:41 compute-0 sudo[286994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:41 compute-0 sudo[286994]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:41 compute-0 sudo[287019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:37:41 compute-0 sudo[287019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:37:42 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8480 writes, 37K keys, 8474 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 8480 writes, 8474 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1524 writes, 7109 keys, 1524 commit groups, 1.0 writes per commit group, ingest: 11.77 MB, 0.02 MB/s
                                           Interval WAL: 1524 writes, 1524 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     44.9      1.24              0.39        22    0.056       0      0       0.0       0.0
                                             L6      1/0   12.51 MB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   4.7     79.6     68.3      3.85              0.77        21    0.183    124K    11K       0.0       0.0
                                            Sum      1/0   12.51 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.7     60.2     62.6      5.09              1.16        43    0.118    124K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.9     86.0     86.6      0.86              0.22        10    0.086     36K   3006       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   0.0     79.6     68.3      3.85              0.77        21    0.183    124K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     45.0      1.24              0.39        21    0.059       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.9      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.054, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.31 GB write, 0.11 MB/s write, 0.30 GB read, 0.10 MB/s read, 5.1 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5569ddb77350#2 capacity: 304.00 MB usage: 28.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000303 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1733,27.85 MB,9.16068%) FilterBlock(44,363.30 KB,0.116705%) IndexBlock(44,586.72 KB,0.188476%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 23 10:37:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:42.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:42 compute-0 podman[287089]: 2026-01-23 10:37:42.296153114 +0000 UTC m=+0.022878297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:37:42 compute-0 podman[287089]: 2026-01-23 10:37:42.539919218 +0000 UTC m=+0.266644381 container create e0f8611ca7ed91001cf449fa3292899c5038ac5ca83023796f1572239c97d88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_noyce, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 10:37:42 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:37:42 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:37:42 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:42 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:42 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:37:42 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:37:42 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:37:42 compute-0 systemd[1]: Started libpod-conmon-e0f8611ca7ed91001cf449fa3292899c5038ac5ca83023796f1572239c97d88f.scope.
Jan 23 10:37:42 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:37:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:43.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:43 compute-0 podman[287089]: 2026-01-23 10:37:43.345241908 +0000 UTC m=+1.071967111 container init e0f8611ca7ed91001cf449fa3292899c5038ac5ca83023796f1572239c97d88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_noyce, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 23 10:37:43 compute-0 podman[287089]: 2026-01-23 10:37:43.352137436 +0000 UTC m=+1.078862609 container start e0f8611ca7ed91001cf449fa3292899c5038ac5ca83023796f1572239c97d88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_noyce, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 10:37:43 compute-0 brave_noyce[287106]: 167 167
Jan 23 10:37:43 compute-0 systemd[1]: libpod-e0f8611ca7ed91001cf449fa3292899c5038ac5ca83023796f1572239c97d88f.scope: Deactivated successfully.
Jan 23 10:37:43 compute-0 podman[287089]: 2026-01-23 10:37:43.480692609 +0000 UTC m=+1.207417772 container attach e0f8611ca7ed91001cf449fa3292899c5038ac5ca83023796f1572239c97d88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_noyce, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 10:37:43 compute-0 podman[287089]: 2026-01-23 10:37:43.48179591 +0000 UTC m=+1.208521063 container died e0f8611ca7ed91001cf449fa3292899c5038ac5ca83023796f1572239c97d88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:37:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 517 B/s rd, 0 op/s
Jan 23 10:37:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:43.781Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:43 compute-0 ceph-mon[74335]: pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 775 B/s rd, 0 op/s
Jan 23 10:37:44 compute-0 nova_compute[249229]: 2026-01-23 10:37:44.006 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:44.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d2a606118607899a651f52cb4ea645eebc04fd60d5d6b6a87d3451c8f34eb62-merged.mount: Deactivated successfully.
Jan 23 10:37:44 compute-0 podman[287089]: 2026-01-23 10:37:44.742356944 +0000 UTC m=+2.469082107 container remove e0f8611ca7ed91001cf449fa3292899c5038ac5ca83023796f1572239c97d88f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:37:44 compute-0 podman[287107]: 2026-01-23 10:37:44.815084307 +0000 UTC m=+1.823804200 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true)
Jan 23 10:37:44 compute-0 systemd[1]: libpod-conmon-e0f8611ca7ed91001cf449fa3292899c5038ac5ca83023796f1572239c97d88f.scope: Deactivated successfully.
Jan 23 10:37:44 compute-0 ceph-mon[74335]: pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 517 B/s rd, 0 op/s
Jan 23 10:37:44 compute-0 podman[287158]: 2026-01-23 10:37:44.908761481 +0000 UTC m=+0.042653573 container create 43d5e2ea33ccdbcb037112b2d593a780dfe232292730fc6f70d765ab468ba58b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_tu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:37:44 compute-0 systemd[1]: Started libpod-conmon-43d5e2ea33ccdbcb037112b2d593a780dfe232292730fc6f70d765ab468ba58b.scope.
Jan 23 10:37:44 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:37:44 compute-0 podman[287158]: 2026-01-23 10:37:44.890819247 +0000 UTC m=+0.024711359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b2bef086c8d862c6e445533e3f117026a549cb4cd28353b1a5828d6b8807bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b2bef086c8d862c6e445533e3f117026a549cb4cd28353b1a5828d6b8807bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b2bef086c8d862c6e445533e3f117026a549cb4cd28353b1a5828d6b8807bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b2bef086c8d862c6e445533e3f117026a549cb4cd28353b1a5828d6b8807bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b2bef086c8d862c6e445533e3f117026a549cb4cd28353b1a5828d6b8807bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:45.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:45 compute-0 podman[287158]: 2026-01-23 10:37:45.318688665 +0000 UTC m=+0.452580777 container init 43d5e2ea33ccdbcb037112b2d593a780dfe232292730fc6f70d765ab468ba58b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 23 10:37:45 compute-0 podman[287158]: 2026-01-23 10:37:45.325656035 +0000 UTC m=+0.459548127 container start 43d5e2ea33ccdbcb037112b2d593a780dfe232292730fc6f70d765ab468ba58b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_tu, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:37:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 517 B/s rd, 0 op/s
Jan 23 10:37:45 compute-0 podman[287158]: 2026-01-23 10:37:45.570954682 +0000 UTC m=+0.704846804 container attach 43d5e2ea33ccdbcb037112b2d593a780dfe232292730fc6f70d765ab468ba58b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 10:37:45 compute-0 sleepy_tu[287174]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:37:45 compute-0 sleepy_tu[287174]: --> All data devices are unavailable
Jan 23 10:37:45 compute-0 systemd[1]: libpod-43d5e2ea33ccdbcb037112b2d593a780dfe232292730fc6f70d765ab468ba58b.scope: Deactivated successfully.
Jan 23 10:37:45 compute-0 podman[287158]: 2026-01-23 10:37:45.665777729 +0000 UTC m=+0.799669841 container died 43d5e2ea33ccdbcb037112b2d593a780dfe232292730fc6f70d765ab468ba58b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 10:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1b2bef086c8d862c6e445533e3f117026a549cb4cd28353b1a5828d6b8807bc-merged.mount: Deactivated successfully.
Jan 23 10:37:45 compute-0 podman[287158]: 2026-01-23 10:37:45.725948723 +0000 UTC m=+0.859840815 container remove 43d5e2ea33ccdbcb037112b2d593a780dfe232292730fc6f70d765ab468ba58b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Jan 23 10:37:45 compute-0 systemd[1]: libpod-conmon-43d5e2ea33ccdbcb037112b2d593a780dfe232292730fc6f70d765ab468ba58b.scope: Deactivated successfully.
Jan 23 10:37:45 compute-0 sudo[287019]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:45 compute-0 sudo[287202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:37:45 compute-0 sudo[287202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:45 compute-0 sudo[287202]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:45 compute-0 sudo[287227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:37:45 compute-0 sudo[287227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:46.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:46 compute-0 podman[287293]: 2026-01-23 10:37:46.251243402 +0000 UTC m=+0.023373231 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:37:46 compute-0 podman[287293]: 2026-01-23 10:37:46.515183493 +0000 UTC m=+0.287313312 container create 76717edc148c98203b5ab5905a2227d80ede0164e0300c4f26804a1fca836c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:37:46 compute-0 nova_compute[249229]: 2026-01-23 10:37:46.787 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:46 compute-0 systemd[1]: Started libpod-conmon-76717edc148c98203b5ab5905a2227d80ede0164e0300c4f26804a1fca836c32.scope.
Jan 23 10:37:46 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:37:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:47.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:47 compute-0 ceph-mon[74335]: pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 517 B/s rd, 0 op/s
Jan 23 10:37:47 compute-0 podman[287293]: 2026-01-23 10:37:47.157316799 +0000 UTC m=+0.929446618 container init 76717edc148c98203b5ab5905a2227d80ede0164e0300c4f26804a1fca836c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:37:47 compute-0 podman[287293]: 2026-01-23 10:37:47.165907145 +0000 UTC m=+0.938036944 container start 76717edc148c98203b5ab5905a2227d80ede0164e0300c4f26804a1fca836c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 10:37:47 compute-0 podman[287293]: 2026-01-23 10:37:47.170337682 +0000 UTC m=+0.942467491 container attach 76717edc148c98203b5ab5905a2227d80ede0164e0300c4f26804a1fca836c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:37:47 compute-0 recursing_boyd[287312]: 167 167
Jan 23 10:37:47 compute-0 systemd[1]: libpod-76717edc148c98203b5ab5905a2227d80ede0164e0300c4f26804a1fca836c32.scope: Deactivated successfully.
Jan 23 10:37:47 compute-0 podman[287293]: 2026-01-23 10:37:47.172464613 +0000 UTC m=+0.944594412 container died 76717edc148c98203b5ab5905a2227d80ede0164e0300c4f26804a1fca836c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 23 10:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d84b29a92e821135619342cff52ef3c420c38b4f0300214a55ee11d2fdcb2d19-merged.mount: Deactivated successfully.
Jan 23 10:37:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 775 B/s rd, 0 op/s
Jan 23 10:37:47 compute-0 podman[287293]: 2026-01-23 10:37:47.586188706 +0000 UTC m=+1.358318495 container remove 76717edc148c98203b5ab5905a2227d80ede0164e0300c4f26804a1fca836c32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:37:47 compute-0 systemd[1]: libpod-conmon-76717edc148c98203b5ab5905a2227d80ede0164e0300c4f26804a1fca836c32.scope: Deactivated successfully.
Jan 23 10:37:47 compute-0 podman[287339]: 2026-01-23 10:37:47.740701373 +0000 UTC m=+0.023131564 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:37:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:47.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:47 compute-0 podman[287339]: 2026-01-23 10:37:47.909004125 +0000 UTC m=+0.191434296 container create b0435605bed5104856b07b18576c4374058b1b9b61489c33a1bfdc1f01e59c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:37:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:48.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:48 compute-0 systemd[1]: Started libpod-conmon-b0435605bed5104856b07b18576c4374058b1b9b61489c33a1bfdc1f01e59c78.scope.
Jan 23 10:37:48 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd644f772c55f251030f439014247d1c3d40c67fb3cbc34afa7839846e83496a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd644f772c55f251030f439014247d1c3d40c67fb3cbc34afa7839846e83496a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd644f772c55f251030f439014247d1c3d40c67fb3cbc34afa7839846e83496a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd644f772c55f251030f439014247d1c3d40c67fb3cbc34afa7839846e83496a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:37:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/980483264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:37:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:37:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/980483264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:37:48 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 23 10:37:48 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:48.876354) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:37:48 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 23 10:37:48 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164668876505, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1226, "num_deletes": 257, "total_data_size": 2264580, "memory_usage": 2310832, "flush_reason": "Manual Compaction"}
Jan 23 10:37:48 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 23 10:37:48 compute-0 podman[287339]: 2026-01-23 10:37:48.880537418 +0000 UTC m=+1.162967609 container init b0435605bed5104856b07b18576c4374058b1b9b61489c33a1bfdc1f01e59c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 23 10:37:48 compute-0 podman[287339]: 2026-01-23 10:37:48.890557395 +0000 UTC m=+1.172987566 container start b0435605bed5104856b07b18576c4374058b1b9b61489c33a1bfdc1f01e59c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 23 10:37:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:48.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:49 compute-0 nova_compute[249229]: 2026-01-23 10:37:49.007 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:37:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:49.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]: {
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:     "1": [
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:         {
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             "devices": [
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "/dev/loop3"
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             ],
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             "lv_name": "ceph_lv0",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             "lv_size": "21470642176",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             "name": "ceph_lv0",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             "tags": {
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.cluster_name": "ceph",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.crush_device_class": "",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.encrypted": "0",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.osd_id": "1",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.type": "block",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.vdo": "0",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:                 "ceph.with_tpm": "0"
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             },
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             "type": "block",
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:             "vg_name": "ceph_vg0"
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:         }
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]:     ]
Jan 23 10:37:49 compute-0 elastic_dewdney[287356]: }
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164669187950, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2198714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37217, "largest_seqno": 38442, "table_properties": {"data_size": 2192833, "index_size": 3208, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12447, "raw_average_key_size": 19, "raw_value_size": 2180973, "raw_average_value_size": 3450, "num_data_blocks": 137, "num_entries": 632, "num_filter_entries": 632, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164555, "oldest_key_time": 1769164555, "file_creation_time": 1769164668, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 311596 microseconds, and 6264 cpu microseconds.
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:37:49 compute-0 systemd[1]: libpod-b0435605bed5104856b07b18576c4374058b1b9b61489c33a1bfdc1f01e59c78.scope: Deactivated successfully.
Jan 23 10:37:49 compute-0 podman[287339]: 2026-01-23 10:37:49.225748538 +0000 UTC m=+1.508178709 container attach b0435605bed5104856b07b18576c4374058b1b9b61489c33a1bfdc1f01e59c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:37:49 compute-0 podman[287339]: 2026-01-23 10:37:49.226901851 +0000 UTC m=+1.509332042 container died b0435605bed5104856b07b18576c4374058b1b9b61489c33a1bfdc1f01e59c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:49.188013) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2198714 bytes OK
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:49.188049) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:49.249339) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:49.249532) EVENT_LOG_v1 {"time_micros": 1769164669249515, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:49.249570) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2259125, prev total WAL file size 2276845, number of live WAL files 2.
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:49.251828) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303130' seq:72057594037927935, type:22 .. '6C6F676D0031323633' seq:0, type:0; will stop at (end)
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2147KB)], [80(12MB)]
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164669251942, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15313380, "oldest_snapshot_seqno": -1}
Jan 23 10:37:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 517 B/s rd, 0 op/s
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6939 keys, 15160652 bytes, temperature: kUnknown
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164669749992, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 15160652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15114116, "index_size": 28056, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 182716, "raw_average_key_size": 26, "raw_value_size": 14988895, "raw_average_value_size": 2160, "num_data_blocks": 1100, "num_entries": 6939, "num_filter_entries": 6939, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769164669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:37:49 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:37:49 compute-0 sudo[287374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:37:49 compute-0 sudo[287374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:49 compute-0 sudo[287374]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:49 compute-0 ceph-mon[74335]: pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 775 B/s rd, 0 op/s
Jan 23 10:37:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:49] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:37:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:49] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:49.750461) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 15160652 bytes
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:50.017010) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 30.7 rd, 30.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 12.5 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(13.9) write-amplify(6.9) OK, records in: 7469, records dropped: 530 output_compression: NoCompression
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:50.017062) EVENT_LOG_v1 {"time_micros": 1769164670017041, "job": 46, "event": "compaction_finished", "compaction_time_micros": 498208, "compaction_time_cpu_micros": 36905, "output_level": 6, "num_output_files": 1, "total_output_size": 15160652, "num_input_records": 7469, "num_output_records": 6939, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164670017756, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164670020021, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:49.251168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:50.020115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:50.020121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:50.020123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:50.020125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:37:50 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:37:50.020127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:37:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:37:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:37:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:37:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:37:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:37:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:37:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:50.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:37:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd644f772c55f251030f439014247d1c3d40c67fb3cbc34afa7839846e83496a-merged.mount: Deactivated successfully.
Jan 23 10:37:50 compute-0 podman[287339]: 2026-01-23 10:37:50.705004945 +0000 UTC m=+2.987435116 container remove b0435605bed5104856b07b18576c4374058b1b9b61489c33a1bfdc1f01e59c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:37:50 compute-0 sudo[287227]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:50 compute-0 systemd[1]: libpod-conmon-b0435605bed5104856b07b18576c4374058b1b9b61489c33a1bfdc1f01e59c78.scope: Deactivated successfully.
Jan 23 10:37:50 compute-0 sudo[287403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:37:50 compute-0 sudo[287403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:50 compute-0 sudo[287403]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:50 compute-0 sudo[287428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:37:50 compute-0 sudo[287428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:51.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:51 compute-0 podman[287490]: 2026-01-23 10:37:51.280911564 +0000 UTC m=+0.022672030 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:37:51 compute-0 podman[287490]: 2026-01-23 10:37:51.44620765 +0000 UTC m=+0.187968116 container create 85d57dba7762d037b629343c6c1a89676301b1841c0755feb237f3506e4ddea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_austin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 23 10:37:51 compute-0 systemd[1]: Started libpod-conmon-85d57dba7762d037b629343c6c1a89676301b1841c0755feb237f3506e4ddea2.scope.
Jan 23 10:37:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:37:51 compute-0 podman[287490]: 2026-01-23 10:37:51.543196548 +0000 UTC m=+0.284957024 container init 85d57dba7762d037b629343c6c1a89676301b1841c0755feb237f3506e4ddea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_austin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:37:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 775 B/s rd, 0 op/s
Jan 23 10:37:51 compute-0 podman[287490]: 2026-01-23 10:37:51.551165527 +0000 UTC m=+0.292925973 container start 85d57dba7762d037b629343c6c1a89676301b1841c0755feb237f3506e4ddea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 23 10:37:51 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/980483264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:37:51 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/980483264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:37:51 compute-0 ceph-mon[74335]: pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 517 B/s rd, 0 op/s
Jan 23 10:37:51 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:37:51 compute-0 exciting_austin[287506]: 167 167
Jan 23 10:37:51 compute-0 podman[287490]: 2026-01-23 10:37:51.556663824 +0000 UTC m=+0.298424300 container attach 85d57dba7762d037b629343c6c1a89676301b1841c0755feb237f3506e4ddea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_austin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:37:51 compute-0 systemd[1]: libpod-85d57dba7762d037b629343c6c1a89676301b1841c0755feb237f3506e4ddea2.scope: Deactivated successfully.
Jan 23 10:37:51 compute-0 podman[287490]: 2026-01-23 10:37:51.55791773 +0000 UTC m=+0.299678206 container died 85d57dba7762d037b629343c6c1a89676301b1841c0755feb237f3506e4ddea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_austin, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:37:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4db2cb6a8eb8295b533448c2dedb84c12b8eb57fa94bac21c3c19e0f924e7ae5-merged.mount: Deactivated successfully.
Jan 23 10:37:51 compute-0 podman[287490]: 2026-01-23 10:37:51.607967284 +0000 UTC m=+0.349727730 container remove 85d57dba7762d037b629343c6c1a89676301b1841c0755feb237f3506e4ddea2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_austin, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 10:37:51 compute-0 podman[287509]: 2026-01-23 10:37:51.6144447 +0000 UTC m=+0.065740235 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 23 10:37:51 compute-0 systemd[1]: libpod-conmon-85d57dba7762d037b629343c6c1a89676301b1841c0755feb237f3506e4ddea2.scope: Deactivated successfully.
Jan 23 10:37:51 compute-0 podman[287547]: 2026-01-23 10:37:51.769866722 +0000 UTC m=+0.041577712 container create b02871394b01353adc0fe8a6fe70f200a786ff49dc4b56e4689a01dd674fd964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_franklin, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 10:37:51 compute-0 nova_compute[249229]: 2026-01-23 10:37:51.791 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:51 compute-0 systemd[1]: Started libpod-conmon-b02871394b01353adc0fe8a6fe70f200a786ff49dc4b56e4689a01dd674fd964.scope.
Jan 23 10:37:51 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc621dacf125b035a7f00fb155f5cd8afcad9e39086ec3f85e14855ac42c7658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc621dacf125b035a7f00fb155f5cd8afcad9e39086ec3f85e14855ac42c7658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc621dacf125b035a7f00fb155f5cd8afcad9e39086ec3f85e14855ac42c7658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc621dacf125b035a7f00fb155f5cd8afcad9e39086ec3f85e14855ac42c7658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:37:51 compute-0 podman[287547]: 2026-01-23 10:37:51.751552287 +0000 UTC m=+0.023263297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:37:51 compute-0 podman[287547]: 2026-01-23 10:37:51.846127637 +0000 UTC m=+0.117838627 container init b02871394b01353adc0fe8a6fe70f200a786ff49dc4b56e4689a01dd674fd964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_franklin, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 10:37:51 compute-0 podman[287547]: 2026-01-23 10:37:51.855956678 +0000 UTC m=+0.127667668 container start b02871394b01353adc0fe8a6fe70f200a786ff49dc4b56e4689a01dd674fd964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 23 10:37:51 compute-0 podman[287547]: 2026-01-23 10:37:51.860065496 +0000 UTC m=+0.131776516 container attach b02871394b01353adc0fe8a6fe70f200a786ff49dc4b56e4689a01dd674fd964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 10:37:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:52.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:52 compute-0 lvm[287638]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:37:52 compute-0 lvm[287638]: VG ceph_vg0 finished
Jan 23 10:37:52 compute-0 inspiring_franklin[287562]: {}
Jan 23 10:37:52 compute-0 systemd[1]: libpod-b02871394b01353adc0fe8a6fe70f200a786ff49dc4b56e4689a01dd674fd964.scope: Deactivated successfully.
Jan 23 10:37:52 compute-0 systemd[1]: libpod-b02871394b01353adc0fe8a6fe70f200a786ff49dc4b56e4689a01dd674fd964.scope: Consumed 1.226s CPU time.
Jan 23 10:37:52 compute-0 podman[287547]: 2026-01-23 10:37:52.652138488 +0000 UTC m=+0.923849478 container died b02871394b01353adc0fe8a6fe70f200a786ff49dc4b56e4689a01dd674fd964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_franklin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:37:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:53.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc621dacf125b035a7f00fb155f5cd8afcad9e39086ec3f85e14855ac42c7658-merged.mount: Deactivated successfully.
Jan 23 10:37:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:53.782Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:53 compute-0 podman[287547]: 2026-01-23 10:37:53.872487568 +0000 UTC m=+2.144198558 container remove b02871394b01353adc0fe8a6fe70f200a786ff49dc4b56e4689a01dd674fd964 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_franklin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 10:37:53 compute-0 sudo[287428]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:53 compute-0 systemd[1]: libpod-conmon-b02871394b01353adc0fe8a6fe70f200a786ff49dc4b56e4689a01dd674fd964.scope: Deactivated successfully.
Jan 23 10:37:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:37:54 compute-0 nova_compute[249229]: 2026-01-23 10:37:54.008 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:54.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:37:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:55.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:37:55 compute-0 ceph-mon[74335]: pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 775 B/s rd, 0 op/s
Jan 23 10:37:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:37:56 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:56.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:56 compute-0 sudo[287660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:37:56 compute-0 sudo[287660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:37:56 compute-0 sudo[287660]: pam_unix(sudo:session): session closed for user root
Jan 23 10:37:56 compute-0 ceph-mon[74335]: pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:56 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:37:56 compute-0 nova_compute[249229]: 2026-01-23 10:37:56.795 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:57.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:57.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:58 compute-0 ceph-mon[74335]: pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:37:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:58.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:37:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:37:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:37:58.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:37:59 compute-0 nova_compute[249229]: 2026-01-23 10:37:59.009 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:37:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:37:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:37:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:59.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:37:59 compute-0 ceph-mon[74335]: pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:37:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:37:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:37:59.794 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:37:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:37:59.795 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:37:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:37:59.795 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:37:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:59] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:37:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:37:59] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:38:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:00.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:00 compute-0 ceph-mon[74335]: pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:38:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:01.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:38:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:01 compute-0 nova_compute[249229]: 2026-01-23 10:38:01.799 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:38:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:02.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:38:03 compute-0 ceph-mon[74335]: pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:38:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:03.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:38:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:03.784Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:04 compute-0 nova_compute[249229]: 2026-01-23 10:38:04.012 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:04.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:38:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:38:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:05.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:05 compute-0 ceph-mon[74335]: pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:38:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:06.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:06 compute-0 nova_compute[249229]: 2026-01-23 10:38:06.803 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:07.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:07 compute-0 ceph-mon[74335]: pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:07.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:38:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:07.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:38:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:07.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:38:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:08.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:08 compute-0 ceph-mon[74335]: pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:08.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:38:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:08.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:38:09 compute-0 nova_compute[249229]: 2026-01-23 10:38:09.014 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:09.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:09 compute-0 sudo[287698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:38:09 compute-0 sudo[287698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:38:09 compute-0 sudo[287698]: pam_unix(sudo:session): session closed for user root
Jan 23 10:38:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:09] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:38:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:09] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:38:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:38:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:10.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:38:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:11.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:11 compute-0 ceph-mon[74335]: pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:11 compute-0 nova_compute[249229]: 2026-01-23 10:38:11.806 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:12.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:12 compute-0 ceph-mon[74335]: pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:13.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:13.784Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:14 compute-0 nova_compute[249229]: 2026-01-23 10:38:14.017 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:38:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:14.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:38:14 compute-0 ceph-mon[74335]: pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:15.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:15 compute-0 podman[287729]: 2026-01-23 10:38:15.544901343 +0000 UTC m=+0.073373783 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 10:38:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:15 compute-0 nova_compute[249229]: 2026-01-23 10:38:15.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:38:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:16.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:16 compute-0 nova_compute[249229]: 2026-01-23 10:38:16.807 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:38:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:17.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:38:17 compute-0 ceph-mon[74335]: pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:17.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:18.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:18 compute-0 nova_compute[249229]: 2026-01-23 10:38:18.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:38:18 compute-0 nova_compute[249229]: 2026-01-23 10:38:18.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:38:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:18.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:19 compute-0 nova_compute[249229]: 2026-01-23 10:38:19.018 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:19.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:19 compute-0 ceph-mon[74335]: pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:19] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:38:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:19] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:38:20
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', '.nfs', 'backups', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:38:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:38:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:38:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:20.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:38:20 compute-0 nova_compute[249229]: 2026-01-23 10:38:20.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:38:20 compute-0 nova_compute[249229]: 2026-01-23 10:38:20.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:38:20 compute-0 nova_compute[249229]: 2026-01-23 10:38:20.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:38:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:38:20 compute-0 nova_compute[249229]: 2026-01-23 10:38:20.729 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:38:21 compute-0 ceph-mon[74335]: pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:38:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:38:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:21.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:38:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:21 compute-0 nova_compute[249229]: 2026-01-23 10:38:21.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:38:21 compute-0 nova_compute[249229]: 2026-01-23 10:38:21.811 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:22 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2310328428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:38:22 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/387035444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:38:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:38:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:22.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:38:22 compute-0 podman[287765]: 2026-01-23 10:38:22.518187268 +0000 UTC m=+0.046737440 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 10:38:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:23.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:23 compute-0 ceph-mon[74335]: pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:23 compute-0 nova_compute[249229]: 2026-01-23 10:38:23.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:38:23 compute-0 nova_compute[249229]: 2026-01-23 10:38:23.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:38:23 compute-0 nova_compute[249229]: 2026-01-23 10:38:23.741 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:38:23 compute-0 nova_compute[249229]: 2026-01-23 10:38:23.741 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:38:23 compute-0 nova_compute[249229]: 2026-01-23 10:38:23.741 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:38:23 compute-0 nova_compute[249229]: 2026-01-23 10:38:23.742 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:38:23 compute-0 nova_compute[249229]: 2026-01-23 10:38:23.742 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:38:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:23.786Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:24 compute-0 nova_compute[249229]: 2026-01-23 10:38:24.020 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:24.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:38:24 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620960525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:38:24 compute-0 nova_compute[249229]: 2026-01-23 10:38:24.264 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:38:24 compute-0 nova_compute[249229]: 2026-01-23 10:38:24.428 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:38:24 compute-0 nova_compute[249229]: 2026-01-23 10:38:24.429 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4539MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:38:24 compute-0 nova_compute[249229]: 2026-01-23 10:38:24.429 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:38:24 compute-0 nova_compute[249229]: 2026-01-23 10:38:24.430 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:38:24 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3620960525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:38:25 compute-0 nova_compute[249229]: 2026-01-23 10:38:25.002 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:38:25 compute-0 nova_compute[249229]: 2026-01-23 10:38:25.003 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:38:25 compute-0 nova_compute[249229]: 2026-01-23 10:38:25.017 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:38:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:25.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:38:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542188124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:38:25 compute-0 nova_compute[249229]: 2026-01-23 10:38:25.459 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:38:25 compute-0 nova_compute[249229]: 2026-01-23 10:38:25.465 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:38:25 compute-0 nova_compute[249229]: 2026-01-23 10:38:25.480 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:38:25 compute-0 nova_compute[249229]: 2026-01-23 10:38:25.482 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:38:25 compute-0 nova_compute[249229]: 2026-01-23 10:38:25.482 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:38:25 compute-0 ceph-mon[74335]: pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:25 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/15073574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:38:25 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/542188124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:38:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:38:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:26.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:38:26 compute-0 ceph-mon[74335]: pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3162061129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:38:26 compute-0 nova_compute[249229]: 2026-01-23 10:38:26.814 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:38:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:27.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:38:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:27.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:38:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:27.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:38:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:27.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:28.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:28 compute-0 ceph-mon[74335]: pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:28.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:29 compute-0 nova_compute[249229]: 2026-01-23 10:38:29.024 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:29.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:29 compute-0 nova_compute[249229]: 2026-01-23 10:38:29.473 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:38:29 compute-0 nova_compute[249229]: 2026-01-23 10:38:29.474 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:38:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:29 compute-0 sudo[287836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:38:29 compute-0 sudo[287836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:38:29 compute-0 sudo[287836]: pam_unix(sudo:session): session closed for user root
Jan 23 10:38:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:29] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:38:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:29] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:38:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:30.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:30 compute-0 nova_compute[249229]: 2026-01-23 10:38:30.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:38:31 compute-0 ceph-mon[74335]: pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.009000258s ======
Jan 23 10:38:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:31.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.009000258s
Jan 23 10:38:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:31 compute-0 nova_compute[249229]: 2026-01-23 10:38:31.818 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:32.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:33.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:33 compute-0 ceph-mon[74335]: pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:33.787Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:38:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:33.787Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:38:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:33.787Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:38:34 compute-0 nova_compute[249229]: 2026-01-23 10:38:34.026 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:34.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:38:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:38:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:35.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:35 compute-0 ceph-mon[74335]: pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:38:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:38:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:36.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:38:36 compute-0 nova_compute[249229]: 2026-01-23 10:38:36.821 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:37.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-crash-compute-0[79594]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 10:38:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:37 compute-0 ceph-mon[74335]: pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:37.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:38:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:38.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:38 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:38 compute-0 ceph-mon[74335]: pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:38.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:39 compute-0 nova_compute[249229]: 2026-01-23 10:38:39.029 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:39.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:39] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:38:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:39] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:38:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:40.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:40 compute-0 ceph-mon[74335]: pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:41.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:41 compute-0 nova_compute[249229]: 2026-01-23 10:38:41.824 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:38:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:42.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:38:43 compute-0 ceph-mon[74335]: pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:43.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:43 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:43.788Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:38:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:43.788Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:38:44 compute-0 nova_compute[249229]: 2026-01-23 10:38:44.031 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:38:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:44.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:38:45 compute-0 ceph-mon[74335]: pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:45.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:46.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:46 compute-0 podman[287877]: 2026-01-23 10:38:46.575943769 +0000 UTC m=+0.096366312 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Jan 23 10:38:46 compute-0 nova_compute[249229]: 2026-01-23 10:38:46.825 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:47.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:47 compute-0 ceph-mon[74335]: pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:47.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:48.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:38:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3411057269' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:38:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:38:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3411057269' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:38:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:48.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:49 compute-0 nova_compute[249229]: 2026-01-23 10:38:49.032 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:49 compute-0 ceph-mon[74335]: pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3411057269' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:38:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/3411057269' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:38:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:49] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:38:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:49] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:38:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:38:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:38:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:38:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:38:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:38:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:38:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:38:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:38:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:50.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:38:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:38:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:51.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:38:51 compute-0 sudo[287908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:38:51 compute-0 sudo[287908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:38:51 compute-0 sudo[287908]: pam_unix(sudo:session): session closed for user root
Jan 23 10:38:51 compute-0 ceph-mon[74335]: pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:51 compute-0 nova_compute[249229]: 2026-01-23 10:38:51.874 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:52.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:53.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:53 compute-0 podman[287935]: 2026-01-23 10:38:53.257098796 +0000 UTC m=+0.041494820 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 10:38:53 compute-0 ceph-mon[74335]: pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:38:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:53.790Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:38:54 compute-0 nova_compute[249229]: 2026-01-23 10:38:54.073 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:38:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:54.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:38:54 compute-0 ceph-mon[74335]: pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:55.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:56 compute-0 sudo[287958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:38:56 compute-0 sudo[287958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:38:56 compute-0 sudo[287958]: pam_unix(sudo:session): session closed for user root
Jan 23 10:38:56 compute-0 sudo[287983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:38:56 compute-0 sudo[287983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:38:56 compute-0 ceph-mon[74335]: pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:38:56 compute-0 nova_compute[249229]: 2026-01-23 10:38:56.877 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:57 compute-0 sudo[287983]: pam_unix(sudo:session): session closed for user root
Jan 23 10:38:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:38:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:38:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:38:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:38:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:38:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 793 B/s rd, 0 op/s
Jan 23 10:38:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:38:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:38:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:38:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:57.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:38:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:38:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:38:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:38:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:38:57 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:38:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:38:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:38:57 compute-0 sudo[288042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:38:57 compute-0 sudo[288042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:38:57 compute-0 sudo[288042]: pam_unix(sudo:session): session closed for user root
Jan 23 10:38:57 compute-0 sudo[288067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:38:57 compute-0 sudo[288067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:38:57 compute-0 podman[288133]: 2026-01-23 10:38:57.699331381 +0000 UTC m=+0.043748925 container create 922716733dc66a07b4a431b47ed7782acbf4a5ccc008fb452a3b741b2627ac88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:38:57 compute-0 systemd[1]: Started libpod-conmon-922716733dc66a07b4a431b47ed7782acbf4a5ccc008fb452a3b741b2627ac88.scope.
Jan 23 10:38:57 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:38:57 compute-0 podman[288133]: 2026-01-23 10:38:57.682541319 +0000 UTC m=+0.026958873 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:38:57 compute-0 podman[288133]: 2026-01-23 10:38:57.785008865 +0000 UTC m=+0.129426429 container init 922716733dc66a07b4a431b47ed7782acbf4a5ccc008fb452a3b741b2627ac88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 23 10:38:57 compute-0 podman[288133]: 2026-01-23 10:38:57.793849258 +0000 UTC m=+0.138266802 container start 922716733dc66a07b4a431b47ed7782acbf4a5ccc008fb452a3b741b2627ac88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:38:57 compute-0 podman[288133]: 2026-01-23 10:38:57.796820163 +0000 UTC m=+0.141237737 container attach 922716733dc66a07b4a431b47ed7782acbf4a5ccc008fb452a3b741b2627ac88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:38:57 compute-0 lucid_lehmann[288149]: 167 167
Jan 23 10:38:57 compute-0 systemd[1]: libpod-922716733dc66a07b4a431b47ed7782acbf4a5ccc008fb452a3b741b2627ac88.scope: Deactivated successfully.
Jan 23 10:38:57 compute-0 conmon[288149]: conmon 922716733dc66a07b4a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-922716733dc66a07b4a431b47ed7782acbf4a5ccc008fb452a3b741b2627ac88.scope/container/memory.events
Jan 23 10:38:57 compute-0 podman[288133]: 2026-01-23 10:38:57.800337364 +0000 UTC m=+0.144754908 container died 922716733dc66a07b4a431b47ed7782acbf4a5ccc008fb452a3b741b2627ac88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 23 10:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-99f2f5513319d489f465734853dc1143bc059b1f737b779950c8d4d7ea08855d-merged.mount: Deactivated successfully.
Jan 23 10:38:57 compute-0 podman[288133]: 2026-01-23 10:38:57.839015542 +0000 UTC m=+0.183433086 container remove 922716733dc66a07b4a431b47ed7782acbf4a5ccc008fb452a3b741b2627ac88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 23 10:38:57 compute-0 systemd[1]: libpod-conmon-922716733dc66a07b4a431b47ed7782acbf4a5ccc008fb452a3b741b2627ac88.scope: Deactivated successfully.
Jan 23 10:38:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:57.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:57 compute-0 podman[288176]: 2026-01-23 10:38:57.996023029 +0000 UTC m=+0.046391470 container create 05d8afecfa41ad62a6855954218786decc4d9b5f5ac268271fe4c5b52d23fe7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 23 10:38:58 compute-0 systemd[1]: Started libpod-conmon-05d8afecfa41ad62a6855954218786decc4d9b5f5ac268271fe4c5b52d23fe7f.scope.
Jan 23 10:38:58 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:38:58 compute-0 podman[288176]: 2026-01-23 10:38:57.97475975 +0000 UTC m=+0.025128221 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd401e05b7807de1dcd1797889161c15c694d5223ecf3e9f9603dc2df54c865e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd401e05b7807de1dcd1797889161c15c694d5223ecf3e9f9603dc2df54c865e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd401e05b7807de1dcd1797889161c15c694d5223ecf3e9f9603dc2df54c865e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd401e05b7807de1dcd1797889161c15c694d5223ecf3e9f9603dc2df54c865e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd401e05b7807de1dcd1797889161c15c694d5223ecf3e9f9603dc2df54c865e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:38:58 compute-0 podman[288176]: 2026-01-23 10:38:58.145386138 +0000 UTC m=+0.195754599 container init 05d8afecfa41ad62a6855954218786decc4d9b5f5ac268271fe4c5b52d23fe7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:38:58 compute-0 podman[288176]: 2026-01-23 10:38:58.152609825 +0000 UTC m=+0.202978266 container start 05d8afecfa41ad62a6855954218786decc4d9b5f5ac268271fe4c5b52d23fe7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:38:58 compute-0 podman[288176]: 2026-01-23 10:38:58.155951491 +0000 UTC m=+0.206319952 container attach 05d8afecfa41ad62a6855954218786decc4d9b5f5ac268271fe4c5b52d23fe7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:38:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:58.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:58 compute-0 keen_dirac[288192]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:38:58 compute-0 keen_dirac[288192]: --> All data devices are unavailable
Jan 23 10:38:58 compute-0 systemd[1]: libpod-05d8afecfa41ad62a6855954218786decc4d9b5f5ac268271fe4c5b52d23fe7f.scope: Deactivated successfully.
Jan 23 10:38:58 compute-0 podman[288176]: 2026-01-23 10:38:58.493270185 +0000 UTC m=+0.543638656 container died 05d8afecfa41ad62a6855954218786decc4d9b5f5ac268271fe4c5b52d23fe7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 10:38:58 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:38:58 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:38:58 compute-0 ceph-mon[74335]: pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 793 B/s rd, 0 op/s
Jan 23 10:38:58 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:38:58 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:38:58 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:38:58 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:38:58 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:38:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd401e05b7807de1dcd1797889161c15c694d5223ecf3e9f9603dc2df54c865e-merged.mount: Deactivated successfully.
Jan 23 10:38:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:58.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:38:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:38:58.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:38:59 compute-0 podman[288176]: 2026-01-23 10:38:59.015708592 +0000 UTC m=+1.066077043 container remove 05d8afecfa41ad62a6855954218786decc4d9b5f5ac268271fe4c5b52d23fe7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 23 10:38:59 compute-0 sudo[288067]: pam_unix(sudo:session): session closed for user root
Jan 23 10:38:59 compute-0 systemd[1]: libpod-conmon-05d8afecfa41ad62a6855954218786decc4d9b5f5ac268271fe4c5b52d23fe7f.scope: Deactivated successfully.
Jan 23 10:38:59 compute-0 nova_compute[249229]: 2026-01-23 10:38:59.075 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:38:59 compute-0 sudo[288220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:38:59 compute-0 sudo[288220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:38:59 compute-0 sudo[288220]: pam_unix(sudo:session): session closed for user root
Jan 23 10:38:59 compute-0 sudo[288245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:38:59 compute-0 sudo[288245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:38:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 529 B/s rd, 0 op/s
Jan 23 10:38:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:38:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:38:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:59.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:38:59 compute-0 podman[288313]: 2026-01-23 10:38:59.52980396 +0000 UTC m=+0.041742836 container create c6e9b06a71e0516b05c2f9f65dbad8201af7c49dec77aa7e5ca6e46cb206faa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_galileo, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Jan 23 10:38:59 compute-0 systemd[1]: Started libpod-conmon-c6e9b06a71e0516b05c2f9f65dbad8201af7c49dec77aa7e5ca6e46cb206faa8.scope.
Jan 23 10:38:59 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:38:59 compute-0 podman[288313]: 2026-01-23 10:38:59.511446764 +0000 UTC m=+0.023385630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:38:59 compute-0 podman[288313]: 2026-01-23 10:38:59.610147622 +0000 UTC m=+0.122086478 container init c6e9b06a71e0516b05c2f9f65dbad8201af7c49dec77aa7e5ca6e46cb206faa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_galileo, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:38:59 compute-0 podman[288313]: 2026-01-23 10:38:59.616945637 +0000 UTC m=+0.128884473 container start c6e9b06a71e0516b05c2f9f65dbad8201af7c49dec77aa7e5ca6e46cb206faa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 23 10:38:59 compute-0 ceph-mon[74335]: pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 529 B/s rd, 0 op/s
Jan 23 10:38:59 compute-0 fervent_galileo[288329]: 167 167
Jan 23 10:38:59 compute-0 systemd[1]: libpod-c6e9b06a71e0516b05c2f9f65dbad8201af7c49dec77aa7e5ca6e46cb206faa8.scope: Deactivated successfully.
Jan 23 10:38:59 compute-0 conmon[288329]: conmon c6e9b06a71e0516b05c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6e9b06a71e0516b05c2f9f65dbad8201af7c49dec77aa7e5ca6e46cb206faa8.scope/container/memory.events
Jan 23 10:38:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:38:59.795 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:38:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:38:59.796 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:38:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:38:59.796 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:38:59 compute-0 podman[288313]: 2026-01-23 10:38:59.937884031 +0000 UTC m=+0.449822887 container attach c6e9b06a71e0516b05c2f9f65dbad8201af7c49dec77aa7e5ca6e46cb206faa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_galileo, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 23 10:38:59 compute-0 podman[288313]: 2026-01-23 10:38:59.939277131 +0000 UTC m=+0.451215997 container died c6e9b06a71e0516b05c2f9f65dbad8201af7c49dec77aa7e5ca6e46cb206faa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_galileo, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 10:38:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:59] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:38:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:38:59] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:39:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.003000086s ======
Jan 23 10:39:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:00.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000086s
Jan 23 10:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e7c7b5fe4aee9d060d8d929dc65d50975f686b5cac9aeb62dde78e435b23372-merged.mount: Deactivated successfully.
Jan 23 10:39:00 compute-0 podman[288313]: 2026-01-23 10:39:00.383424046 +0000 UTC m=+0.895362882 container remove c6e9b06a71e0516b05c2f9f65dbad8201af7c49dec77aa7e5ca6e46cb206faa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_galileo, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 10:39:00 compute-0 systemd[1]: libpod-conmon-c6e9b06a71e0516b05c2f9f65dbad8201af7c49dec77aa7e5ca6e46cb206faa8.scope: Deactivated successfully.
Jan 23 10:39:00 compute-0 podman[288357]: 2026-01-23 10:39:00.539885398 +0000 UTC m=+0.022821065 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:39:00 compute-0 podman[288357]: 2026-01-23 10:39:00.787112931 +0000 UTC m=+0.270048558 container create 05d8245931eafda652133bceef3c4f46c6447faaf5b2c758ad6984687438eee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_galois, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 10:39:00 compute-0 systemd[1]: Started libpod-conmon-05d8245931eafda652133bceef3c4f46c6447faaf5b2c758ad6984687438eee6.scope.
Jan 23 10:39:00 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096bbe3e2759e34cdbc2da3708d5fcf663f0c8f5ae3aefc6dfe242f39e382f84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096bbe3e2759e34cdbc2da3708d5fcf663f0c8f5ae3aefc6dfe242f39e382f84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096bbe3e2759e34cdbc2da3708d5fcf663f0c8f5ae3aefc6dfe242f39e382f84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096bbe3e2759e34cdbc2da3708d5fcf663f0c8f5ae3aefc6dfe242f39e382f84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:39:00 compute-0 podman[288357]: 2026-01-23 10:39:00.973162771 +0000 UTC m=+0.456098408 container init 05d8245931eafda652133bceef3c4f46c6447faaf5b2c758ad6984687438eee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 23 10:39:00 compute-0 podman[288357]: 2026-01-23 10:39:00.979445341 +0000 UTC m=+0.462380968 container start 05d8245931eafda652133bceef3c4f46c6447faaf5b2c758ad6984687438eee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_galois, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 23 10:39:00 compute-0 podman[288357]: 2026-01-23 10:39:00.993152904 +0000 UTC m=+0.476088531 container attach 05d8245931eafda652133bceef3c4f46c6447faaf5b2c758ad6984687438eee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 23 10:39:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 793 B/s rd, 0 op/s
Jan 23 10:39:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:01.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:01 compute-0 dazzling_galois[288375]: {
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:     "1": [
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:         {
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             "devices": [
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "/dev/loop3"
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             ],
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             "lv_name": "ceph_lv0",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             "lv_size": "21470642176",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             "name": "ceph_lv0",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             "tags": {
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.cluster_name": "ceph",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.crush_device_class": "",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.encrypted": "0",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.osd_id": "1",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.type": "block",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.vdo": "0",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:                 "ceph.with_tpm": "0"
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             },
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             "type": "block",
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:             "vg_name": "ceph_vg0"
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:         }
Jan 23 10:39:01 compute-0 dazzling_galois[288375]:     ]
Jan 23 10:39:01 compute-0 dazzling_galois[288375]: }
Jan 23 10:39:01 compute-0 systemd[1]: libpod-05d8245931eafda652133bceef3c4f46c6447faaf5b2c758ad6984687438eee6.scope: Deactivated successfully.
Jan 23 10:39:01 compute-0 podman[288357]: 2026-01-23 10:39:01.259262918 +0000 UTC m=+0.742198545 container died 05d8245931eafda652133bceef3c4f46c6447faaf5b2c758ad6984687438eee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:39:01 compute-0 ceph-mon[74335]: pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 793 B/s rd, 0 op/s
Jan 23 10:39:01 compute-0 nova_compute[249229]: 2026-01-23 10:39:01.880 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-096bbe3e2759e34cdbc2da3708d5fcf663f0c8f5ae3aefc6dfe242f39e382f84-merged.mount: Deactivated successfully.
Jan 23 10:39:02 compute-0 podman[288357]: 2026-01-23 10:39:02.052517283 +0000 UTC m=+1.535452910 container remove 05d8245931eafda652133bceef3c4f46c6447faaf5b2c758ad6984687438eee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:39:02 compute-0 sudo[288245]: pam_unix(sudo:session): session closed for user root
Jan 23 10:39:02 compute-0 systemd[1]: libpod-conmon-05d8245931eafda652133bceef3c4f46c6447faaf5b2c758ad6984687438eee6.scope: Deactivated successfully.
Jan 23 10:39:02 compute-0 sudo[288397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:39:02 compute-0 sudo[288397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:39:02 compute-0 sudo[288397]: pam_unix(sudo:session): session closed for user root
Jan 23 10:39:02 compute-0 sudo[288422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:39:02 compute-0 sudo[288422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:39:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:02.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:02 compute-0 podman[288488]: 2026-01-23 10:39:02.624558571 +0000 UTC m=+0.039659097 container create 11aa21a559ff699dffb0b881e84eb7c9c3b7ad4453c0e7fdc858f6e9bf360dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_blackwell, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:39:02 compute-0 systemd[1]: Started libpod-conmon-11aa21a559ff699dffb0b881e84eb7c9c3b7ad4453c0e7fdc858f6e9bf360dcf.scope.
Jan 23 10:39:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:39:02 compute-0 podman[288488]: 2026-01-23 10:39:02.697680596 +0000 UTC m=+0.112781132 container init 11aa21a559ff699dffb0b881e84eb7c9c3b7ad4453c0e7fdc858f6e9bf360dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_blackwell, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 23 10:39:02 compute-0 podman[288488]: 2026-01-23 10:39:02.606920726 +0000 UTC m=+0.022021272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:39:02 compute-0 podman[288488]: 2026-01-23 10:39:02.703545334 +0000 UTC m=+0.118645860 container start 11aa21a559ff699dffb0b881e84eb7c9c3b7ad4453c0e7fdc858f6e9bf360dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:39:02 compute-0 podman[288488]: 2026-01-23 10:39:02.706227511 +0000 UTC m=+0.121328037 container attach 11aa21a559ff699dffb0b881e84eb7c9c3b7ad4453c0e7fdc858f6e9bf360dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_blackwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:39:02 compute-0 stoic_blackwell[288505]: 167 167
Jan 23 10:39:02 compute-0 systemd[1]: libpod-11aa21a559ff699dffb0b881e84eb7c9c3b7ad4453c0e7fdc858f6e9bf360dcf.scope: Deactivated successfully.
Jan 23 10:39:02 compute-0 podman[288488]: 2026-01-23 10:39:02.708163736 +0000 UTC m=+0.123264272 container died 11aa21a559ff699dffb0b881e84eb7c9c3b7ad4453c0e7fdc858f6e9bf360dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Jan 23 10:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5e9c44a937af4dad091b1d1d4a24ad47128562aa00a3a9f07b3cb733f9df1da-merged.mount: Deactivated successfully.
Jan 23 10:39:02 compute-0 podman[288488]: 2026-01-23 10:39:02.744804576 +0000 UTC m=+0.159905102 container remove 11aa21a559ff699dffb0b881e84eb7c9c3b7ad4453c0e7fdc858f6e9bf360dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 10:39:02 compute-0 systemd[1]: libpod-conmon-11aa21a559ff699dffb0b881e84eb7c9c3b7ad4453c0e7fdc858f6e9bf360dcf.scope: Deactivated successfully.
Jan 23 10:39:02 compute-0 podman[288530]: 2026-01-23 10:39:02.897936443 +0000 UTC m=+0.038807143 container create c2a7c7de833b2a4ea8b1440f7a09defb6ec5f05271d7913366162931aca8639a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_newton, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:39:02 compute-0 systemd[1]: Started libpod-conmon-c2a7c7de833b2a4ea8b1440f7a09defb6ec5f05271d7913366162931aca8639a.scope.
Jan 23 10:39:02 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a468ba5e6d519b486d4b4a03df04e2b6897f9506cbb5e918b08c1f0902c357b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a468ba5e6d519b486d4b4a03df04e2b6897f9506cbb5e918b08c1f0902c357b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a468ba5e6d519b486d4b4a03df04e2b6897f9506cbb5e918b08c1f0902c357b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a468ba5e6d519b486d4b4a03df04e2b6897f9506cbb5e918b08c1f0902c357b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:39:02 compute-0 podman[288530]: 2026-01-23 10:39:02.881545863 +0000 UTC m=+0.022416593 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:39:03 compute-0 podman[288530]: 2026-01-23 10:39:03.166726703 +0000 UTC m=+0.307597423 container init c2a7c7de833b2a4ea8b1440f7a09defb6ec5f05271d7913366162931aca8639a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 23 10:39:03 compute-0 podman[288530]: 2026-01-23 10:39:03.174325661 +0000 UTC m=+0.315196361 container start c2a7c7de833b2a4ea8b1440f7a09defb6ec5f05271d7913366162931aca8639a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_newton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:39:03 compute-0 podman[288530]: 2026-01-23 10:39:03.178764468 +0000 UTC m=+0.319635188 container attach c2a7c7de833b2a4ea8b1440f7a09defb6ec5f05271d7913366162931aca8639a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_newton, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 23 10:39:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 529 B/s rd, 0 op/s
Jan 23 10:39:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:03.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:03.791Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:03 compute-0 lvm[288621]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:39:03 compute-0 lvm[288621]: VG ceph_vg0 finished
Jan 23 10:39:03 compute-0 gifted_newton[288547]: {}
Jan 23 10:39:03 compute-0 podman[288530]: 2026-01-23 10:39:03.896895612 +0000 UTC m=+1.037766342 container died c2a7c7de833b2a4ea8b1440f7a09defb6ec5f05271d7913366162931aca8639a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 10:39:03 compute-0 systemd[1]: libpod-c2a7c7de833b2a4ea8b1440f7a09defb6ec5f05271d7913366162931aca8639a.scope: Deactivated successfully.
Jan 23 10:39:03 compute-0 systemd[1]: libpod-c2a7c7de833b2a4ea8b1440f7a09defb6ec5f05271d7913366162931aca8639a.scope: Consumed 1.084s CPU time.
Jan 23 10:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a468ba5e6d519b486d4b4a03df04e2b6897f9506cbb5e918b08c1f0902c357b4-merged.mount: Deactivated successfully.
Jan 23 10:39:03 compute-0 podman[288530]: 2026-01-23 10:39:03.943678792 +0000 UTC m=+1.084549492 container remove c2a7c7de833b2a4ea8b1440f7a09defb6ec5f05271d7913366162931aca8639a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 10:39:03 compute-0 systemd[1]: libpod-conmon-c2a7c7de833b2a4ea8b1440f7a09defb6ec5f05271d7913366162931aca8639a.scope: Deactivated successfully.
Jan 23 10:39:03 compute-0 sudo[288422]: pam_unix(sudo:session): session closed for user root
Jan 23 10:39:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:39:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:39:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:39:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:39:04 compute-0 sudo[288638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:39:04 compute-0 sudo[288638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:39:04 compute-0 sudo[288638]: pam_unix(sudo:session): session closed for user root
Jan 23 10:39:04 compute-0 nova_compute[249229]: 2026-01-23 10:39:04.077 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:04 compute-0 ceph-mon[74335]: pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 529 B/s rd, 0 op/s
Jan 23 10:39:04 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:39:04 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:39:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:04.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:39:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:39:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 529 B/s rd, 0 op/s
Jan 23 10:39:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:05.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:39:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:06.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:06 compute-0 ceph-mon[74335]: pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 529 B/s rd, 0 op/s
Jan 23 10:39:06 compute-0 nova_compute[249229]: 2026-01-23 10:39:06.883 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 793 B/s rd, 0 op/s
Jan 23 10:39:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:07.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:07.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:08.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:08 compute-0 ceph-mon[74335]: pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 793 B/s rd, 0 op/s
Jan 23 10:39:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:08.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:09 compute-0 nova_compute[249229]: 2026-01-23 10:39:09.078 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:09.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:09 compute-0 ceph-mon[74335]: pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:09] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:39:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:09] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:39:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:10.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:39:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:11.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:39:11 compute-0 sudo[288670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:39:11 compute-0 sudo[288670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:39:11 compute-0 sudo[288670]: pam_unix(sudo:session): session closed for user root
Jan 23 10:39:11 compute-0 nova_compute[249229]: 2026-01-23 10:39:11.885 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:12 compute-0 ceph-mon[74335]: pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:12.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:13.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:13.792Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:13 compute-0 ceph-mon[74335]: pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:14 compute-0 nova_compute[249229]: 2026-01-23 10:39:14.081 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:14.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:15.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:15 compute-0 nova_compute[249229]: 2026-01-23 10:39:15.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:39:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:16.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:16 compute-0 ceph-mon[74335]: pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:16 compute-0 nova_compute[249229]: 2026-01-23 10:39:16.888 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:17.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:17 compute-0 podman[288701]: 2026-01-23 10:39:17.593622253 +0000 UTC m=+0.117277281 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 10:39:17 compute-0 ceph-mon[74335]: pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:17.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:39:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:18.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:39:18 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:18.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:19 compute-0 nova_compute[249229]: 2026-01-23 10:39:19.083 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:19.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:19] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:39:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:19] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:39:20
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.nfs', 'volumes', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', '.rgw.root']
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:39:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:39:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:39:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:20.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:20 compute-0 ceph-mon[74335]: pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:20 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:39:20 compute-0 nova_compute[249229]: 2026-01-23 10:39:20.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:39:20 compute-0 nova_compute[249229]: 2026-01-23 10:39:20.717 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:39:20 compute-0 nova_compute[249229]: 2026-01-23 10:39:20.718 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:39:20 compute-0 nova_compute[249229]: 2026-01-23 10:39:20.733 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:39:20 compute-0 nova_compute[249229]: 2026-01-23 10:39:20.733 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:39:20 compute-0 nova_compute[249229]: 2026-01-23 10:39:20.733 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:39:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:39:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:21.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:21 compute-0 ceph-mon[74335]: pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:21 compute-0 nova_compute[249229]: 2026-01-23 10:39:21.893 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:22.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:23 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1085583944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:39:23 compute-0 podman[288734]: 2026-01-23 10:39:23.543944032 +0000 UTC m=+0.071508620 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 23 10:39:23 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:23 compute-0 nova_compute[249229]: 2026-01-23 10:39:23.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:39:23 compute-0 nova_compute[249229]: 2026-01-23 10:39:23.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:39:23 compute-0 nova_compute[249229]: 2026-01-23 10:39:23.718 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:39:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:23.794Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:39:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:23.795Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:39:23 compute-0 nova_compute[249229]: 2026-01-23 10:39:23.815 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:39:23 compute-0 nova_compute[249229]: 2026-01-23 10:39:23.815 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:39:23 compute-0 nova_compute[249229]: 2026-01-23 10:39:23.816 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:39:23 compute-0 nova_compute[249229]: 2026-01-23 10:39:23.816 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:39:23 compute-0 nova_compute[249229]: 2026-01-23 10:39:23.816 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:39:24 compute-0 nova_compute[249229]: 2026-01-23 10:39:24.084 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:24.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:39:24 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040346182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:39:24 compute-0 nova_compute[249229]: 2026-01-23 10:39:24.343 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:39:24 compute-0 ceph-mon[74335]: pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:24 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2536873901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:39:24 compute-0 nova_compute[249229]: 2026-01-23 10:39:24.501 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:39:24 compute-0 nova_compute[249229]: 2026-01-23 10:39:24.502 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4534MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:39:24 compute-0 nova_compute[249229]: 2026-01-23 10:39:24.502 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:39:24 compute-0 nova_compute[249229]: 2026-01-23 10:39:24.502 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:39:24 compute-0 nova_compute[249229]: 2026-01-23 10:39:24.569 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:39:24 compute-0 nova_compute[249229]: 2026-01-23 10:39:24.569 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:39:24 compute-0 nova_compute[249229]: 2026-01-23 10:39:24.586 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:39:25 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:39:25 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515020931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:39:25 compute-0 nova_compute[249229]: 2026-01-23 10:39:25.079 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:39:25 compute-0 nova_compute[249229]: 2026-01-23 10:39:25.084 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:39:25 compute-0 nova_compute[249229]: 2026-01-23 10:39:25.110 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:39:25 compute-0 nova_compute[249229]: 2026-01-23 10:39:25.112 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:39:25 compute-0 nova_compute[249229]: 2026-01-23 10:39:25.112 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:39:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:25.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:26.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1040346182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:39:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3838063102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:39:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/712866539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:39:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3515020931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:39:26 compute-0 nova_compute[249229]: 2026-01-23 10:39:26.895 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:27.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:27.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:39:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:28.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:28 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:28 compute-0 ceph-mon[74335]: pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:28 compute-0 ceph-mon[74335]: pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:28.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:39:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:28.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:39:29 compute-0 nova_compute[249229]: 2026-01-23 10:39:29.088 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:29.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:29] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:39:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:29] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 23 10:39:30 compute-0 nova_compute[249229]: 2026-01-23 10:39:30.103 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:39:30 compute-0 nova_compute[249229]: 2026-01-23 10:39:30.104 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:39:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:30.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:30 compute-0 ceph-mon[74335]: pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:30 compute-0 nova_compute[249229]: 2026-01-23 10:39:30.912 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:39:30 compute-0 nova_compute[249229]: 2026-01-23 10:39:30.913 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:39:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:31.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:31 compute-0 sudo[288803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:39:31 compute-0 sudo[288803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:39:31 compute-0 sudo[288803]: pam_unix(sudo:session): session closed for user root
Jan 23 10:39:31 compute-0 nova_compute[249229]: 2026-01-23 10:39:31.898 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:32.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:32 compute-0 ceph-mon[74335]: pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:33.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:33.795Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:33 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:33 compute-0 ceph-mon[74335]: pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:34 compute-0 nova_compute[249229]: 2026-01-23 10:39:34.088 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:34.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:39:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:39:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:39:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:35.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:39:35 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:39:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:36.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:36 compute-0 nova_compute[249229]: 2026-01-23 10:39:36.902 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:36 compute-0 ceph-mon[74335]: pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:37.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:37 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 23 10:39:37 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:37.780170) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:39:37 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 23 10:39:37 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164777780309, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1155, "num_deletes": 251, "total_data_size": 2166522, "memory_usage": 2208488, "flush_reason": "Manual Compaction"}
Jan 23 10:39:37 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 23 10:39:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:37.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164778134710, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 2102623, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38443, "largest_seqno": 39597, "table_properties": {"data_size": 2096978, "index_size": 3039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11955, "raw_average_key_size": 19, "raw_value_size": 2085740, "raw_average_value_size": 3482, "num_data_blocks": 131, "num_entries": 599, "num_filter_entries": 599, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164669, "oldest_key_time": 1769164669, "file_creation_time": 1769164777, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 354579 microseconds, and 5833 cpu microseconds.
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:39:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:38.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:38.134759) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 2102623 bytes OK
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:38.134779) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:38.433562) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:38.433620) EVENT_LOG_v1 {"time_micros": 1769164778433608, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:38.433650) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2161354, prev total WAL file size 2162591, number of live WAL files 2.
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:38.434810) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(2053KB)], [83(14MB)]
Jan 23 10:39:38 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164778434950, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 17263275, "oldest_snapshot_seqno": -1}
Jan 23 10:39:38 compute-0 ceph-mon[74335]: pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:38.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:39 compute-0 nova_compute[249229]: 2026-01-23 10:39:39.092 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:39:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:39.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7022 keys, 14949297 bytes, temperature: kUnknown
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164779470863, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 14949297, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14902714, "index_size": 27911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 185159, "raw_average_key_size": 26, "raw_value_size": 14776447, "raw_average_value_size": 2104, "num_data_blocks": 1086, "num_entries": 7022, "num_filter_entries": 7022, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769164778, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:39.471231) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 14949297 bytes
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:39.838394) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 16.7 rd, 14.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 14.5 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(15.3) write-amplify(7.1) OK, records in: 7538, records dropped: 516 output_compression: NoCompression
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:39.838440) EVENT_LOG_v1 {"time_micros": 1769164779838422, "job": 48, "event": "compaction_finished", "compaction_time_micros": 1035966, "compaction_time_cpu_micros": 31810, "output_level": 6, "num_output_files": 1, "total_output_size": 14949297, "num_input_records": 7538, "num_output_records": 7022, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164779839003, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164779841582, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:38.434689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:39.841669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:39.841675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:39.841677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:39.841679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:39:39 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:39:39.841681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:39:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:39] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:39:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:39] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:39:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:40.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:40 compute-0 ceph-mon[74335]: pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1400: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:41.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:39:41 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 3181 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 435 writes, 682 keys, 435 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s
                                           Interval WAL: 435 writes, 206 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 10:39:41 compute-0 nova_compute[249229]: 2026-01-23 10:39:41.904 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:42.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:42 compute-0 ceph-mon[74335]: pgmap v1400: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1401: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:43.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:43.797Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:44 compute-0 nova_compute[249229]: 2026-01-23 10:39:44.095 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:44.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:44 compute-0 ceph-mon[74335]: pgmap v1401: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1402: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:45.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:46.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:46 compute-0 nova_compute[249229]: 2026-01-23 10:39:46.907 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1403: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:47.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:47 compute-0 ceph-mon[74335]: pgmap v1402: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:47.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:48.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:48 compute-0 podman[288845]: 2026-01-23 10:39:48.546888342 +0000 UTC m=+0.076163073 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 10:39:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:39:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/786440771' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:39:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:39:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/786440771' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:39:48 compute-0 ceph-mon[74335]: pgmap v1403: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:48.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:49 compute-0 nova_compute[249229]: 2026-01-23 10:39:49.096 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1404: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:49.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:49] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:39:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:49] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:39:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/786440771' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:39:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/786440771' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:39:49 compute-0 ceph-mon[74335]: pgmap v1404: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:39:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:39:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:39:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:39:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:39:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:39:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:39:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:39:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:50.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1405: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:51.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:51 compute-0 sudo[288874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:39:51 compute-0 sudo[288874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:39:51 compute-0 sudo[288874]: pam_unix(sudo:session): session closed for user root
Jan 23 10:39:51 compute-0 nova_compute[249229]: 2026-01-23 10:39:51.910 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:52.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:53 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:39:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1406: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:53.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:53.797Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:54 compute-0 nova_compute[249229]: 2026-01-23 10:39:54.098 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:54 compute-0 ceph-mon[74335]: pgmap v1405: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:54 compute-0 ceph-mon[74335]: pgmap v1406: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:54.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:54 compute-0 podman[288902]: 2026-01-23 10:39:54.512245193 +0000 UTC m=+0.046668558 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 10:39:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1407: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:39:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:55.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:39:55 compute-0 ceph-mon[74335]: pgmap v1407: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:56.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:56 compute-0 nova_compute[249229]: 2026-01-23 10:39:56.912 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1408: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:57.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:57 compute-0 ceph-mon[74335]: pgmap v1408: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:39:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:57.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:39:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:58.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:39:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:39:58.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:39:59 compute-0 nova_compute[249229]: 2026-01-23 10:39:59.101 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:39:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1409: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:39:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:39:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:39:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:59.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:39:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:39:59.797 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:39:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:39:59.797 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:39:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:39:59.797 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:39:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:39:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:59] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:39:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:39:59] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Jan 23 10:40:00 compute-0 ceph-mon[74335]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore; 2 failed cephadm daemon(s)
Jan 23 10:40:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:00.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:00 compute-0 ceph-mon[74335]: pgmap v1409: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:00 compute-0 ceph-mon[74335]: overall HEALTH_WARN 2 OSD(s) experiencing slow operations in BlueStore; 2 failed cephadm daemon(s)
Jan 23 10:40:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1410: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:01.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:01 compute-0 nova_compute[249229]: 2026-01-23 10:40:01.917 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:01 compute-0 ceph-mon[74335]: pgmap v1410: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:40:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:02.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:40:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1411: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:03.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:03.799Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:40:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:03.799Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:40:04 compute-0 nova_compute[249229]: 2026-01-23 10:40:04.103 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:04 compute-0 ceph-mon[74335]: pgmap v1411: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:04 compute-0 sudo[288931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:40:04 compute-0 sudo[288931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:04 compute-0 sudo[288931]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:04.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:04 compute-0 sudo[288956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 23 10:40:04 compute-0 sudo[288956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:04 compute-0 sudo[288956]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:40:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:40:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:04 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:05 compute-0 sudo[289002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:40:05 compute-0 sudo[289002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:05 compute-0 sudo[289002]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:05 compute-0 sudo[289027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:40:05 compute-0 sudo[289027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:40:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:40:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1412: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:05.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:05 compute-0 sudo[289027]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:40:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:40:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:40:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:40:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:40:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1413: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 592 B/s rd, 0 op/s
Jan 23 10:40:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 23 10:40:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 23 10:40:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:40:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 23 10:40:05 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:40:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:40:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:40:05 compute-0 sudo[289083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:40:05 compute-0 sudo[289083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:05 compute-0 sudo[289083]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:05 compute-0 sudo[289109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 23 10:40:05 compute-0 sudo[289109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:40:06 compute-0 ceph-mon[74335]: pgmap v1412: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:40:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:40:06 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:40:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:06.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:40:06 compute-0 podman[289172]: 2026-01-23 10:40:06.414867797 +0000 UTC m=+0.093525171 container create 4a60e0b43e000a5448ce44f75640d2d55c9f9cfc0dbdd5a8ce30f2e70f1512f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bhabha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 23 10:40:06 compute-0 podman[289172]: 2026-01-23 10:40:06.341788683 +0000 UTC m=+0.020446077 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:40:06 compute-0 systemd[1]: Started libpod-conmon-4a60e0b43e000a5448ce44f75640d2d55c9f9cfc0dbdd5a8ce30f2e70f1512f8.scope.
Jan 23 10:40:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:40:06 compute-0 podman[289172]: 2026-01-23 10:40:06.544726428 +0000 UTC m=+0.223383822 container init 4a60e0b43e000a5448ce44f75640d2d55c9f9cfc0dbdd5a8ce30f2e70f1512f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bhabha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 10:40:06 compute-0 podman[289172]: 2026-01-23 10:40:06.554283181 +0000 UTC m=+0.232940555 container start 4a60e0b43e000a5448ce44f75640d2d55c9f9cfc0dbdd5a8ce30f2e70f1512f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bhabha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Jan 23 10:40:06 compute-0 gifted_bhabha[289189]: 167 167
Jan 23 10:40:06 compute-0 systemd[1]: libpod-4a60e0b43e000a5448ce44f75640d2d55c9f9cfc0dbdd5a8ce30f2e70f1512f8.scope: Deactivated successfully.
Jan 23 10:40:06 compute-0 podman[289172]: 2026-01-23 10:40:06.570422093 +0000 UTC m=+0.249079477 container attach 4a60e0b43e000a5448ce44f75640d2d55c9f9cfc0dbdd5a8ce30f2e70f1512f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 10:40:06 compute-0 podman[289172]: 2026-01-23 10:40:06.570902237 +0000 UTC m=+0.249559611 container died 4a60e0b43e000a5448ce44f75640d2d55c9f9cfc0dbdd5a8ce30f2e70f1512f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:40:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dff7fdae6b9bee93a3fd4baf9788e7f29a02845c006f928342ad96aa3be3a52-merged.mount: Deactivated successfully.
Jan 23 10:40:06 compute-0 podman[289172]: 2026-01-23 10:40:06.645973548 +0000 UTC m=+0.324630922 container remove 4a60e0b43e000a5448ce44f75640d2d55c9f9cfc0dbdd5a8ce30f2e70f1512f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bhabha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 23 10:40:06 compute-0 systemd[1]: libpod-conmon-4a60e0b43e000a5448ce44f75640d2d55c9f9cfc0dbdd5a8ce30f2e70f1512f8.scope: Deactivated successfully.
Jan 23 10:40:06 compute-0 podman[289215]: 2026-01-23 10:40:06.851881097 +0000 UTC m=+0.070612524 container create bad9fc42943129e608193709c28b70c12a99b73c188bdb00ee6b039cc9515baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:40:06 compute-0 podman[289215]: 2026-01-23 10:40:06.803901782 +0000 UTC m=+0.022633239 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:40:06 compute-0 systemd[1]: Started libpod-conmon-bad9fc42943129e608193709c28b70c12a99b73c188bdb00ee6b039cc9515baa.scope.
Jan 23 10:40:06 compute-0 nova_compute[249229]: 2026-01-23 10:40:06.920 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:06 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f55c2311e181cfec4d37af605e2854c53e93052fefc41400973e53a329b5edd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f55c2311e181cfec4d37af605e2854c53e93052fefc41400973e53a329b5edd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f55c2311e181cfec4d37af605e2854c53e93052fefc41400973e53a329b5edd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f55c2311e181cfec4d37af605e2854c53e93052fefc41400973e53a329b5edd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f55c2311e181cfec4d37af605e2854c53e93052fefc41400973e53a329b5edd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:06 compute-0 podman[289215]: 2026-01-23 10:40:06.987005098 +0000 UTC m=+0.205736545 container init bad9fc42943129e608193709c28b70c12a99b73c188bdb00ee6b039cc9515baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 23 10:40:06 compute-0 podman[289215]: 2026-01-23 10:40:06.994734289 +0000 UTC m=+0.213465716 container start bad9fc42943129e608193709c28b70c12a99b73c188bdb00ee6b039cc9515baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_panini, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:40:06 compute-0 podman[289215]: 2026-01-23 10:40:06.999414403 +0000 UTC m=+0.218145830 container attach bad9fc42943129e608193709c28b70c12a99b73c188bdb00ee6b039cc9515baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_panini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Jan 23 10:40:07 compute-0 ceph-mon[74335]: pgmap v1413: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 592 B/s rd, 0 op/s
Jan 23 10:40:07 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:07 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 10:40:07 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 10:40:07 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:40:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:40:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:07.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:40:07 compute-0 flamboyant_panini[289232]: --> passed data devices: 0 physical, 1 LVM
Jan 23 10:40:07 compute-0 flamboyant_panini[289232]: --> All data devices are unavailable
Jan 23 10:40:07 compute-0 systemd[1]: libpod-bad9fc42943129e608193709c28b70c12a99b73c188bdb00ee6b039cc9515baa.scope: Deactivated successfully.
Jan 23 10:40:07 compute-0 podman[289215]: 2026-01-23 10:40:07.361892748 +0000 UTC m=+0.580624185 container died bad9fc42943129e608193709c28b70c12a99b73c188bdb00ee6b039cc9515baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_panini, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 10:40:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f55c2311e181cfec4d37af605e2854c53e93052fefc41400973e53a329b5edd-merged.mount: Deactivated successfully.
Jan 23 10:40:07 compute-0 podman[289215]: 2026-01-23 10:40:07.400828524 +0000 UTC m=+0.619559951 container remove bad9fc42943129e608193709c28b70c12a99b73c188bdb00ee6b039cc9515baa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_panini, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:40:07 compute-0 systemd[1]: libpod-conmon-bad9fc42943129e608193709c28b70c12a99b73c188bdb00ee6b039cc9515baa.scope: Deactivated successfully.
Jan 23 10:40:07 compute-0 sudo[289109]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:07 compute-0 sudo[289259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:40:07 compute-0 sudo[289259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:07 compute-0 sudo[289259]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:07 compute-0 sudo[289284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- lvm list --format json
Jan 23 10:40:07 compute-0 sudo[289284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1414: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Jan 23 10:40:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:07.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:40:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:07.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 23 10:40:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:07.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:07 compute-0 podman[289347]: 2026-01-23 10:40:07.932635959 +0000 UTC m=+0.037435833 container create cb4da3d5046aa768a93c6db41c54d390c14f62a4a69531d9241c0c250d930e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:40:07 compute-0 systemd[1]: Started libpod-conmon-cb4da3d5046aa768a93c6db41c54d390c14f62a4a69531d9241c0c250d930e8c.scope.
Jan 23 10:40:07 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:40:08 compute-0 podman[289347]: 2026-01-23 10:40:08.002605044 +0000 UTC m=+0.107404938 container init cb4da3d5046aa768a93c6db41c54d390c14f62a4a69531d9241c0c250d930e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 23 10:40:08 compute-0 podman[289347]: 2026-01-23 10:40:08.009152251 +0000 UTC m=+0.113952125 container start cb4da3d5046aa768a93c6db41c54d390c14f62a4a69531d9241c0c250d930e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:40:08 compute-0 podman[289347]: 2026-01-23 10:40:07.917809044 +0000 UTC m=+0.022608938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:40:08 compute-0 podman[289347]: 2026-01-23 10:40:08.01259848 +0000 UTC m=+0.117398374 container attach cb4da3d5046aa768a93c6db41c54d390c14f62a4a69531d9241c0c250d930e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_williams, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 23 10:40:08 compute-0 wonderful_williams[289364]: 167 167
Jan 23 10:40:08 compute-0 systemd[1]: libpod-cb4da3d5046aa768a93c6db41c54d390c14f62a4a69531d9241c0c250d930e8c.scope: Deactivated successfully.
Jan 23 10:40:08 compute-0 podman[289347]: 2026-01-23 10:40:08.014407282 +0000 UTC m=+0.119207166 container died cb4da3d5046aa768a93c6db41c54d390c14f62a4a69531d9241c0c250d930e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 10:40:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca8b2c988fc058940e741063ba85659746b26c9428afc37ac9e4b281b3649fa4-merged.mount: Deactivated successfully.
Jan 23 10:40:08 compute-0 podman[289347]: 2026-01-23 10:40:08.045927025 +0000 UTC m=+0.150726899 container remove cb4da3d5046aa768a93c6db41c54d390c14f62a4a69531d9241c0c250d930e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_williams, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:40:08 compute-0 systemd[1]: libpod-conmon-cb4da3d5046aa768a93c6db41c54d390c14f62a4a69531d9241c0c250d930e8c.scope: Deactivated successfully.
Jan 23 10:40:08 compute-0 podman[289389]: 2026-01-23 10:40:08.194292245 +0000 UTC m=+0.039724199 container create 9754e9117b8cb3f158702fb722c0321d6b79f3a8d01ffcdb330f0f375b81743c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_archimedes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 23 10:40:08 compute-0 systemd[1]: Started libpod-conmon-9754e9117b8cb3f158702fb722c0321d6b79f3a8d01ffcdb330f0f375b81743c.scope.
Jan 23 10:40:08 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79e86d97b60f8764055a1202bffd3d7ccb85c71aa2fd573cb5d0fb294fa0d5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79e86d97b60f8764055a1202bffd3d7ccb85c71aa2fd573cb5d0fb294fa0d5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79e86d97b60f8764055a1202bffd3d7ccb85c71aa2fd573cb5d0fb294fa0d5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79e86d97b60f8764055a1202bffd3d7ccb85c71aa2fd573cb5d0fb294fa0d5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:08 compute-0 podman[289389]: 2026-01-23 10:40:08.269792328 +0000 UTC m=+0.115224272 container init 9754e9117b8cb3f158702fb722c0321d6b79f3a8d01ffcdb330f0f375b81743c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_archimedes, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:40:08 compute-0 podman[289389]: 2026-01-23 10:40:08.177622898 +0000 UTC m=+0.023054842 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:40:08 compute-0 podman[289389]: 2026-01-23 10:40:08.278092386 +0000 UTC m=+0.123524310 container start 9754e9117b8cb3f158702fb722c0321d6b79f3a8d01ffcdb330f0f375b81743c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_archimedes, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 10:40:08 compute-0 podman[289389]: 2026-01-23 10:40:08.281649908 +0000 UTC m=+0.127081832 container attach 9754e9117b8cb3f158702fb722c0321d6b79f3a8d01ffcdb330f0f375b81743c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_archimedes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 10:40:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:08.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:08 compute-0 objective_archimedes[289407]: {
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:     "1": [
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:         {
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             "devices": [
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "/dev/loop3"
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             ],
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             "lv_name": "ceph_lv0",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             "lv_size": "21470642176",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f3005f84-239a-55b6-a948-8f1fb592b920,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e272688e-6b15-4719-9011-a7e7310819a5,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             "lv_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             "name": "ceph_lv0",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             "tags": {
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.block_uuid": "H5XBPE-ipFz-e91k-JQfG-qrZd-SuIG-jS3jpB",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.cephx_lockbox_secret": "",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.cluster_fsid": "f3005f84-239a-55b6-a948-8f1fb592b920",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.cluster_name": "ceph",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.crush_device_class": "",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.encrypted": "0",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.osd_fsid": "e272688e-6b15-4719-9011-a7e7310819a5",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.osd_id": "1",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.type": "block",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.vdo": "0",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:                 "ceph.with_tpm": "0"
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             },
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             "type": "block",
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:             "vg_name": "ceph_vg0"
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:         }
Jan 23 10:40:08 compute-0 objective_archimedes[289407]:     ]
Jan 23 10:40:08 compute-0 objective_archimedes[289407]: }
Jan 23 10:40:08 compute-0 systemd[1]: libpod-9754e9117b8cb3f158702fb722c0321d6b79f3a8d01ffcdb330f0f375b81743c.scope: Deactivated successfully.
Jan 23 10:40:08 compute-0 podman[289389]: 2026-01-23 10:40:08.575083195 +0000 UTC m=+0.420515139 container died 9754e9117b8cb3f158702fb722c0321d6b79f3a8d01ffcdb330f0f375b81743c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_archimedes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:40:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a79e86d97b60f8764055a1202bffd3d7ccb85c71aa2fd573cb5d0fb294fa0d5f-merged.mount: Deactivated successfully.
Jan 23 10:40:08 compute-0 podman[289389]: 2026-01-23 10:40:08.614224186 +0000 UTC m=+0.459656110 container remove 9754e9117b8cb3f158702fb722c0321d6b79f3a8d01ffcdb330f0f375b81743c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_archimedes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 10:40:08 compute-0 systemd[1]: libpod-conmon-9754e9117b8cb3f158702fb722c0321d6b79f3a8d01ffcdb330f0f375b81743c.scope: Deactivated successfully.
Jan 23 10:40:08 compute-0 ceph-mon[74335]: pgmap v1414: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Jan 23 10:40:08 compute-0 sudo[289284]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:08 compute-0 sudo[289430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:40:08 compute-0 sudo[289430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:08 compute-0 sudo[289430]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:08 compute-0 sudo[289455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid f3005f84-239a-55b6-a948-8f1fb592b920 -- raw list --format json
Jan 23 10:40:08 compute-0 sudo[289455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:08 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:08.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:09 compute-0 nova_compute[249229]: 2026-01-23 10:40:09.103 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:09 compute-0 podman[289519]: 2026-01-23 10:40:09.139959098 +0000 UTC m=+0.038744671 container create 6a26e42b7453af4fa3c159b392ab00e11d6b8d95820153b7c9c19fd9913bb73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 10:40:09 compute-0 systemd[1]: Started libpod-conmon-6a26e42b7453af4fa3c159b392ab00e11d6b8d95820153b7c9c19fd9913bb73f.scope.
Jan 23 10:40:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:40:09 compute-0 podman[289519]: 2026-01-23 10:40:09.213658459 +0000 UTC m=+0.112444052 container init 6a26e42b7453af4fa3c159b392ab00e11d6b8d95820153b7c9c19fd9913bb73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 10:40:09 compute-0 podman[289519]: 2026-01-23 10:40:09.123021152 +0000 UTC m=+0.021806745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:40:09 compute-0 podman[289519]: 2026-01-23 10:40:09.220241958 +0000 UTC m=+0.119027531 container start 6a26e42b7453af4fa3c159b392ab00e11d6b8d95820153b7c9c19fd9913bb73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 23 10:40:09 compute-0 jovial_goldstine[289535]: 167 167
Jan 23 10:40:09 compute-0 systemd[1]: libpod-6a26e42b7453af4fa3c159b392ab00e11d6b8d95820153b7c9c19fd9913bb73f.scope: Deactivated successfully.
Jan 23 10:40:09 compute-0 podman[289519]: 2026-01-23 10:40:09.223920953 +0000 UTC m=+0.122706606 container attach 6a26e42b7453af4fa3c159b392ab00e11d6b8d95820153b7c9c19fd9913bb73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 10:40:09 compute-0 podman[289519]: 2026-01-23 10:40:09.225578181 +0000 UTC m=+0.124363754 container died 6a26e42b7453af4fa3c159b392ab00e11d6b8d95820153b7c9c19fd9913bb73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 23 10:40:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc2b2eb3c65065bbf7f3d3d404396795756d2249e5e078c1e09c762ba847b11-merged.mount: Deactivated successfully.
Jan 23 10:40:09 compute-0 podman[289519]: 2026-01-23 10:40:09.261028436 +0000 UTC m=+0.159813999 container remove 6a26e42b7453af4fa3c159b392ab00e11d6b8d95820153b7c9c19fd9913bb73f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 10:40:09 compute-0 systemd[1]: libpod-conmon-6a26e42b7453af4fa3c159b392ab00e11d6b8d95820153b7c9c19fd9913bb73f.scope: Deactivated successfully.
Jan 23 10:40:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:09.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:09 compute-0 podman[289557]: 2026-01-23 10:40:09.425393265 +0000 UTC m=+0.040827361 container create 9434999e2f6172b718c783b12a416843917e5ab5516a071c4a0ff9a8fe2e7c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 23 10:40:09 compute-0 systemd[1]: Started libpod-conmon-9434999e2f6172b718c783b12a416843917e5ab5516a071c4a0ff9a8fe2e7c2b.scope.
Jan 23 10:40:09 compute-0 systemd[1]: Started libcrun container.
Jan 23 10:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed1f9a96d877e37db4d5975c4b33f023fb683b708022094e3017d170216eeed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:09 compute-0 podman[289557]: 2026-01-23 10:40:09.40603076 +0000 UTC m=+0.021464886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 23 10:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed1f9a96d877e37db4d5975c4b33f023fb683b708022094e3017d170216eeed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed1f9a96d877e37db4d5975c4b33f023fb683b708022094e3017d170216eeed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed1f9a96d877e37db4d5975c4b33f023fb683b708022094e3017d170216eeed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 10:40:09 compute-0 podman[289557]: 2026-01-23 10:40:09.5184568 +0000 UTC m=+0.133890966 container init 9434999e2f6172b718c783b12a416843917e5ab5516a071c4a0ff9a8fe2e7c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 10:40:09 compute-0 podman[289557]: 2026-01-23 10:40:09.525065059 +0000 UTC m=+0.140499165 container start 9434999e2f6172b718c783b12a416843917e5ab5516a071c4a0ff9a8fe2e7c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 10:40:09 compute-0 podman[289557]: 2026-01-23 10:40:09.529596889 +0000 UTC m=+0.145031015 container attach 9434999e2f6172b718c783b12a416843917e5ab5516a071c4a0ff9a8fe2e7c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:40:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1415: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Jan 23 10:40:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:09] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:40:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:09] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:40:10 compute-0 lvm[289649]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:40:10 compute-0 lvm[289649]: VG ceph_vg0 finished
Jan 23 10:40:10 compute-0 vigilant_greider[289574]: {}
Jan 23 10:40:10 compute-0 systemd[1]: libpod-9434999e2f6172b718c783b12a416843917e5ab5516a071c4a0ff9a8fe2e7c2b.scope: Deactivated successfully.
Jan 23 10:40:10 compute-0 systemd[1]: libpod-9434999e2f6172b718c783b12a416843917e5ab5516a071c4a0ff9a8fe2e7c2b.scope: Consumed 1.040s CPU time.
Jan 23 10:40:10 compute-0 podman[289557]: 2026-01-23 10:40:10.187183328 +0000 UTC m=+0.802617454 container died 9434999e2f6172b718c783b12a416843917e5ab5516a071c4a0ff9a8fe2e7c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 10:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ed1f9a96d877e37db4d5975c4b33f023fb683b708022094e3017d170216eeed-merged.mount: Deactivated successfully.
Jan 23 10:40:10 compute-0 podman[289557]: 2026-01-23 10:40:10.228291686 +0000 UTC m=+0.843725772 container remove 9434999e2f6172b718c783b12a416843917e5ab5516a071c4a0ff9a8fe2e7c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_greider, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 10:40:10 compute-0 systemd[1]: libpod-conmon-9434999e2f6172b718c783b12a416843917e5ab5516a071c4a0ff9a8fe2e7c2b.scope: Deactivated successfully.
Jan 23 10:40:10 compute-0 sudo[289455]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 23 10:40:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 23 10:40:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:10.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:10 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:10 compute-0 sudo[289665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 23 10:40:10 compute-0 sudo[289665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:10 compute-0 sudo[289665]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:11.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:11 compute-0 ceph-mon[74335]: pgmap v1415: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Jan 23 10:40:11 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:11 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:40:11 compute-0 sudo[289690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:40:11 compute-0 sudo[289690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:11 compute-0 sudo[289690]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1416: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Jan 23 10:40:11 compute-0 nova_compute[249229]: 2026-01-23 10:40:11.923 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:40:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:12.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:40:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:40:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:13.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:40:13 compute-0 ceph-mon[74335]: pgmap v1416: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Jan 23 10:40:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1417: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Jan 23 10:40:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:13.800Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 23 10:40:13 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:13.801Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:14 compute-0 nova_compute[249229]: 2026-01-23 10:40:14.106 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:14 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:14 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:40:14 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:14.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:40:14 compute-0 ceph-mon[74335]: pgmap v1417: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Jan 23 10:40:14 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:15 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:15 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:15 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:15.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:15 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1418: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Jan 23 10:40:15 compute-0 nova_compute[249229]: 2026-01-23 10:40:15.717 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:40:16 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:16 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:16 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:16.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:16 compute-0 nova_compute[249229]: 2026-01-23 10:40:16.926 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:17 compute-0 ceph-mon[74335]: pgmap v1418: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Jan 23 10:40:17 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:17 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:17 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:17.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:17 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1419: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:17 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:17.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:18 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:18 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:18 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:18.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:18 compute-0 ceph-mon[74335]: pgmap v1419: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:18 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:18.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:19 compute-0 nova_compute[249229]: 2026-01-23 10:40:19.107 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:19 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:19 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:19 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:19.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:19 compute-0 podman[289723]: 2026-01-23 10:40:19.557675547 +0000 UTC m=+0.087814687 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 10:40:19 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1420: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:19 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:19 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:19] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:40:19 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:19] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Optimize plan auto_2026-01-23_10:40:20
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [balancer INFO root] do_upmap
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.control', '.nfs', 'cephfs.cephfs.meta', '.mgr', 'vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.rgw.root']
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [balancer INFO root] prepared 0/10 upmap changes
Jan 23 10:40:20 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:40:20 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:40:20 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:20 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:20 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:20.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:20 compute-0 nova_compute[249229]: 2026-01-23 10:40:20.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:40:20 compute-0 nova_compute[249229]: 2026-01-23 10:40:20.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 10:40:20 compute-0 nova_compute[249229]: 2026-01-23 10:40:20.716 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 10:40:20 compute-0 nova_compute[249229]: 2026-01-23 10:40:20.731 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 10:40:20 compute-0 nova_compute[249229]: 2026-01-23 10:40:20.731 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:40:20 compute-0 nova_compute[249229]: 2026-01-23 10:40:20.731 249233 DEBUG nova.compute.manager [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:40:20 compute-0 ceph-mgr[74633]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 10:40:21 compute-0 ceph-mon[74335]: pgmap v1420: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:21 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:40:21 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:21 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:21 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:21.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:21 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1421: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:21 compute-0 nova_compute[249229]: 2026-01-23 10:40:21.930 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:22 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:22 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:40:22 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:22.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:40:23 compute-0 ceph-mon[74335]: pgmap v1421: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:23 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:23 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:40:23 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:23.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:40:23 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1422: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:23 compute-0 nova_compute[249229]: 2026-01-23 10:40:23.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:40:23 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:23.802Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:24 compute-0 nova_compute[249229]: 2026-01-23 10:40:24.109 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:24 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:24 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:24 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:24.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:24 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.154555) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164825154634, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 655, "num_deletes": 251, "total_data_size": 1008306, "memory_usage": 1019840, "flush_reason": "Manual Compaction"}
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164825162931, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 689452, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39598, "largest_seqno": 40252, "table_properties": {"data_size": 686359, "index_size": 1001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8261, "raw_average_key_size": 20, "raw_value_size": 679885, "raw_average_value_size": 1712, "num_data_blocks": 43, "num_entries": 397, "num_filter_entries": 397, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164778, "oldest_key_time": 1769164778, "file_creation_time": 1769164825, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 8411 microseconds, and 4315 cpu microseconds.
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.162976) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 689452 bytes OK
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.162998) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.164808) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.164824) EVENT_LOG_v1 {"time_micros": 1769164825164820, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.164840) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1004862, prev total WAL file size 1004862, number of live WAL files 2.
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.165524) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323535' seq:72057594037927935, type:22 .. '6D6772737461740031353037' seq:0, type:0; will stop at (end)
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(673KB)], [86(14MB)]
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164825165596, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 15638749, "oldest_snapshot_seqno": -1}
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6920 keys, 11743221 bytes, temperature: kUnknown
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164825249153, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 11743221, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11701811, "index_size": 22994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 183242, "raw_average_key_size": 26, "raw_value_size": 11581671, "raw_average_value_size": 1673, "num_data_blocks": 892, "num_entries": 6920, "num_filter_entries": 6920, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161658, "oldest_key_time": 0, "file_creation_time": 1769164825, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfd65f37-5d13-4bd7-9c84-01e95a04d6c8", "db_session_id": "H0542XX9TGHHLXC3GFH0", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.249547) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11743221 bytes
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.251159) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.9 rd, 140.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 14.3 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(39.7) write-amplify(17.0) OK, records in: 7419, records dropped: 499 output_compression: NoCompression
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.251184) EVENT_LOG_v1 {"time_micros": 1769164825251173, "job": 50, "event": "compaction_finished", "compaction_time_micros": 83666, "compaction_time_cpu_micros": 25767, "output_level": 6, "num_output_files": 1, "total_output_size": 11743221, "num_input_records": 7419, "num_output_records": 6920, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164825251787, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164825255604, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.165398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.255753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.255758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.255759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.255760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:40:25 compute-0 ceph-mon[74335]: rocksdb: (Original Log Time 2026/01/23-10:40:25.255762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 10:40:25 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:25 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:25 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:25 compute-0 ceph-mon[74335]: pgmap v1422: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:25 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1646194779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:40:25 compute-0 podman[289757]: 2026-01-23 10:40:25.515291796 +0000 UTC m=+0.048127170 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 10:40:25 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1423: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:25 compute-0 nova_compute[249229]: 2026-01-23 10:40:25.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:40:25 compute-0 nova_compute[249229]: 2026-01-23 10:40:25.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:40:25 compute-0 nova_compute[249229]: 2026-01-23 10:40:25.737 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:40:25 compute-0 nova_compute[249229]: 2026-01-23 10:40:25.737 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:40:25 compute-0 nova_compute[249229]: 2026-01-23 10:40:25.738 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:40:25 compute-0 nova_compute[249229]: 2026-01-23 10:40:25.738 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 10:40:25 compute-0 nova_compute[249229]: 2026-01-23 10:40:25.738 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:40:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:40:26 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984292967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.222 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.376 249233 WARNING nova.virt.libvirt.driver [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.378 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4500MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.378 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.378 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:40:26 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:26 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:26 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:26.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.451 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.452 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.472 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 10:40:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/4269113456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:40:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1984292967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:40:26 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1373761955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:40:26 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 23 10:40:26 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247912976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.915 249233 DEBUG oslo_concurrency.processutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.920 249233 DEBUG nova.compute.provider_tree [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed in ProviderTree for provider: a1f82a16-d7e7-4500-99d7-a20de995d9a2 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.933 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.936 249233 DEBUG nova.scheduler.client.report [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Inventory has not changed for provider a1f82a16-d7e7-4500-99d7-a20de995d9a2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.937 249233 DEBUG nova.compute.resource_tracker [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 10:40:26 compute-0 nova_compute[249229]: 2026-01-23 10:40:26.938 249233 DEBUG oslo_concurrency.lockutils [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:40:27 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:27 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:27 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:27.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:27 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1424: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:27 compute-0 ceph-mon[74335]: pgmap v1423: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2247912976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:40:27 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1342687352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 10:40:27 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:27.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:28 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:28 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:40:28 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:28.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:40:28 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:28.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:29 compute-0 nova_compute[249229]: 2026-01-23 10:40:29.111 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:29 compute-0 ceph-mon[74335]: pgmap v1424: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:29 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:29 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:40:29 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:29.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:40:29 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1425: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:29 compute-0 nova_compute[249229]: 2026-01-23 10:40:29.930 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:40:29 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:29 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:29] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:40:29 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:29] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:40:30 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:30 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:30 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:30.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:31 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:31 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:31 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:31.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:31 compute-0 ceph-mon[74335]: pgmap v1425: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:31 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1426: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:31 compute-0 sudo[289825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:40:31 compute-0 sudo[289825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:31 compute-0 sudo[289825]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:31 compute-0 nova_compute[249229]: 2026-01-23 10:40:31.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:40:31 compute-0 nova_compute[249229]: 2026-01-23 10:40:31.716 249233 DEBUG oslo_service.periodic_task [None req-0623b36c-8378-484d-a465-75209495a966 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 10:40:31 compute-0 nova_compute[249229]: 2026-01-23 10:40:31.937 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:32 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:32 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:32 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:32.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:33 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:33 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:33 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:33.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:33 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1427: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:33 compute-0 ceph-mon[74335]: pgmap v1426: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:33 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:33.803Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:34 compute-0 nova_compute[249229]: 2026-01-23 10:40:34.113 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:34 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:34 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:40:34 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:34.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:40:34 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:35 compute-0 ceph-mon[74335]: pgmap v1427: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:35 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:40:35 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:40:35 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:35 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:35 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:35.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:35 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1428: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:36 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:40:36 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:36 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:40:36 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:36.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:40:36 compute-0 nova_compute[249229]: 2026-01-23 10:40:36.939 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:37 compute-0 ceph-mon[74335]: pgmap v1428: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:37 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:37 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:37 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:37.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:37 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1429: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:37 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:37.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:38 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:38 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:38 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:38.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:38 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:38.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:39 compute-0 nova_compute[249229]: 2026-01-23 10:40:39.116 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:39 compute-0 ceph-mon[74335]: pgmap v1429: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:39 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:39 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:39 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:39.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:39 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1430: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:39 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:39 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:39] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:40:39 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:39] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:40:40 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:40 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:40 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:41 compute-0 ceph-mon[74335]: pgmap v1430: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:41 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:41 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:41 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:41.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:41 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1431: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:41 compute-0 nova_compute[249229]: 2026-01-23 10:40:41.942 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:42 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:42 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:40:42 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:42.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:40:43 compute-0 ceph-mon[74335]: pgmap v1431: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:43 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:43 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:43 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:43.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:43 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1432: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:43 compute-0 sshd-session[289862]: Accepted publickey for zuul from 192.168.122.10 port 52522 ssh2: ECDSA SHA256:VirhpRcIg3eaQ2of1D68YV1JVeFZwgFg3WdbJHtted4
Jan 23 10:40:43 compute-0 systemd-logind[784]: New session 59 of user zuul.
Jan 23 10:40:43 compute-0 systemd[1]: Started Session 59 of User zuul.
Jan 23 10:40:43 compute-0 sshd-session[289862]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 23 10:40:43 compute-0 sudo[289866]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 23 10:40:43 compute-0 sudo[289866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 23 10:40:43 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:43.804Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:44 compute-0 nova_compute[249229]: 2026-01-23 10:40:44.119 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:44 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:44 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:40:44 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:44.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:40:44 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:45 compute-0 ceph-mon[74335]: pgmap v1432: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:45 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:45 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:45 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:45.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:45 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1433: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:46 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27121 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:46 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27002 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:46 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:46 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:40:46 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:46.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:40:46 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17607 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:46 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27130 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:46 compute-0 nova_compute[249229]: 2026-01-23 10:40:46.944 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:46 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17613 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:46 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27008 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:47 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:47 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:40:47 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:47.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:40:47 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 23 10:40:47 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1900352961' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:40:47 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1434: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:47 compute-0 ceph-mon[74335]: pgmap v1433: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:47 compute-0 ceph-mon[74335]: from='client.27121 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:47 compute-0 ceph-mon[74335]: from='client.27002 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:47 compute-0 ceph-mon[74335]: from='client.17607 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:47 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2902829741' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:40:47 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:47.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:48 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:48 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:48 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:48.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:48 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 23 10:40:48 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1287907540' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:40:48 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:48.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 23 10:40:49 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1287907540' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:40:49 compute-0 ceph-mon[74335]: from='client.27130 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:49 compute-0 ceph-mon[74335]: from='client.17613 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:49 compute-0 ceph-mon[74335]: from='client.27008 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/390486485' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:40:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1900352961' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 10:40:49 compute-0 ceph-mon[74335]: pgmap v1434: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:49 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1287907540' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 10:40:49 compute-0 nova_compute[249229]: 2026-01-23 10:40:49.120 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:49 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:49 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:49 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:49.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:49 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1435: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:49 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:49 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:49] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:40:49 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:49] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:40:50 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:40:50 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:40:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:40:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:40:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:40:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:40:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 10:40:50 compute-0 ceph-mgr[74633]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 10:40:50 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:50 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:50 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:50.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:50 compute-0 ceph-mon[74335]: from='client.? 192.168.122.10:0/1287907540' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 10:40:50 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:40:50 compute-0 podman[290165]: 2026-01-23 10:40:50.591122675 +0000 UTC m=+0.112288518 container health_status ffd3fd97eb8ccb69f06ea21df042fbfc8784045b6313bea6a684bfa168f1196d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 10:40:50 compute-0 ovs-vsctl[290219]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 23 10:40:51 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:51 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:51 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:51.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:51 compute-0 ceph-mon[74335]: pgmap v1435: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:51 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1436: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:51 compute-0 sudo[290267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:40:51 compute-0 sudo[290267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:40:51 compute-0 sudo[290267]: pam_unix(sudo:session): session closed for user root
Jan 23 10:40:51 compute-0 virtqemud[248554]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 23 10:40:51 compute-0 virtqemud[248554]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 23 10:40:51 compute-0 virtqemud[248554]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 23 10:40:51 compute-0 nova_compute[249229]: 2026-01-23 10:40:51.946 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:52 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: cache status {prefix=cache status} (starting...)
Jan 23 10:40:52 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:52 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:52 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:40:52 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:52.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:40:52 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27163 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:52 compute-0 ceph-mon[74335]: pgmap v1436: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:52 compute-0 ceph-mon[74335]: from='client.27163 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:52 compute-0 lvm[290564]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 10:40:52 compute-0 lvm[290564]: VG ceph_vg0 finished
Jan 23 10:40:52 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: client ls {prefix=client ls} (starting...)
Jan 23 10:40:52 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:52 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27175 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:52 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 23 10:40:52 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17643 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: damage ls {prefix=damage ls} (starting...)
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:53 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27184 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:53 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:53 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:53 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:53.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump loads {prefix=dump loads} (starting...)
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 23 10:40:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2803210379' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:53 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1437: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:53 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17655 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27032 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:53 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27199 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:40:53 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2749828607' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:40:53 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:53.807Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:53 compute-0 ceph-mon[74335]: from='client.27175 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2735878044' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mon[74335]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mon[74335]: from='client.17643 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mon[74335]: from='client.27184 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2326781007' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2803210379' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 23 10:40:53 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:54 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17667 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 23 10:40:54 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:40:54 compute-0 nova_compute[249229]: 2026-01-23 10:40:54.122 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:54 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27044 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Jan 23 10:40:54 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2987158526' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 23 10:40:54 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:54 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 23 10:40:54 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:54 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:54 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:54 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:54.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:54 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17679 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27056 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: ops {prefix=ops} (starting...)
Jan 23 10:40:54 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:54 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27244 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 23 10:40:54 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1579084197' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: pgmap v1437: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.17655 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.27032 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.27199 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4283374481' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2749828607' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.17667 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1569340402' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.27044 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1170917103' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2987158526' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.17679 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1278040154' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2991488001' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1520575926' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:40:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:54 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 23 10:40:54 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1948725629' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27068 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27256 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17706 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:55 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:55 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:40:55 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:55.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:40:55 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: session ls {prefix=session ls} (starting...)
Jan 23 10:40:55 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms Can't run that command on an inactive MDS!
Jan 23 10:40:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 23 10:40:55 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275578091' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mds[94628]: mds.cephfs.compute-0.ymknms asok_command: status {prefix=status} (starting...)
Jan 23 10:40:55 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1438: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:55 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17721 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 23 10:40:55 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 23 10:40:55 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4141150626' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.27056 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.27244 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1579084197' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1948725629' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2517041095' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.27068 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/431388215' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.27256 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.17706 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2275578091' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2494977871' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2891389420' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/758304118' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2152170927' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mon[74335]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:40:55 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27095 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 23 10:40:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474866121' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:40:56 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27101 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:56 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:56 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:56 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:56.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 23 10:40:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120773140' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:40:56 compute-0 podman[291103]: 2026-01-23 10:40:56.528181084 +0000 UTC m=+0.056057587 container health_status 7c4b1914e1e86e16566f40dac1c2043d119deee57a046ec037c84640bd0c067d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'cdc8d10f0e05d8a70b43cf26938a886cf76be4340fa6a898edc4cc90e10001b1-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-99269140098de15b48680c41e5313433c184a4380a28a4d66e6de0ece8f46703-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 10:40:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 23 10:40:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2664983427' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 23 10:40:56 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27295 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:56 compute-0 ceph-mgr[74633]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:40:56 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T10:40:56.701+0000 7f28655d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:40:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 23 10:40:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:40:56 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 10:40:56 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3387213635' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:40:56 compute-0 nova_compute[249229]: 2026-01-23 10:40:56.949 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:57 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17769 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mgr[74633]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:40:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T10:40:57.071+0000 7f28655d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:40:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 23 10:40:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/246354880' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 10:40:57 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:57 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:57 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:57.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:57 compute-0 ceph-mon[74335]: pgmap v1438: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.17721 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4141150626' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.27095 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/858187861' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3865666164' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2474866121' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4002027261' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.27101 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1120773140' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/676404484' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2664983427' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1456795587' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/883122861' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3387213635' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 23 10:40:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687217909' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1439: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:57 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17787 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: 2026-01-23T10:40:57.735+0000 7f28655d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:40:57 compute-0 ceph-mgr[74633]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 10:40:57 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 23 10:40:57 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3547435586' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:40:57 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27331 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:57 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:57.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 23 10:40:58 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3182871652' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17808 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27346 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:58 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:58 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:58 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:58.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.27295 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.17769 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/365728033' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3781180657' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/246354880' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3056118106' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/529095491' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2687217909' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3547435586' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/823562891' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3096053969' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3182871652' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3408741616' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/458587115' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 23 10:40:58 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271257117' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17820 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27361 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:51.754039+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:52.754255+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:53.754494+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:54.754673+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:55.754782+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:56.754971+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:57.755106+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:58.755240+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:07:59.755418+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:00.755546+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:01.755672+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:02.755878+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:03.756058+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:04.756252+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:05.756421+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:06.756960+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 1859584 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:07.757084+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:08.757202+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:09.757318+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:10.757416+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:11.757538+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:12.757671+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:13.757789+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:14.757948+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:15.758160+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:16.758394+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:17.758511+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:18.758678+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:19.758840+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:20.758982+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:21.759112+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:22.759262+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:23.759410+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8839000 session 0x55c0a9517c20
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:24.759536+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:25.759657+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:26.759836+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:27.759989+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:28.760189+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:29.760325+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:30.760459+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:31.760600+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:32.760757+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:33.760889+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:34.761006+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:35.761137+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.339496613s of 53.345134735s, submitted: 2
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:36.761411+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935842 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:37.761527+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:38.761667+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:39.761839+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:40.762044+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 2007040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:41.762205+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935858 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:42.762492+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:43.762642+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:44.762848+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:45.762981+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:46.763589+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935858 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:47.763744+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 1998848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:48.763879+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.945819855s of 12.972918510s, submitted: 9
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:49.764258+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:50.764425+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:51.764550+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935558 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:52.764665+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:53.764835+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1990656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:54.765117+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:55.765322+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:56.765563+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:57.765728+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:58.765915+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:08:59.766058+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:00.766208+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1982464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:01.766409+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:02.766609+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:03.766782+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:04.766928+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:05.767162+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:06.767406+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:07.767560+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:08.767697+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1974272 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:09.767853+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 1966080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:10.768028+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 1966080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:11.768178+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 7882 writes, 31K keys, 7882 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7882 writes, 1550 syncs, 5.09 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 677 writes, 1212 keys, 677 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
                                           Interval WAL: 677 writes, 322 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.218       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a51309b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a5131350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:12.768326+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:13.768528+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:14.768674+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:15.768806+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:16.768991+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:17.769162+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:18.769419+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:19.769646+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:20.769973+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:21.770115+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:22.770321+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:23.770526+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:24.770669+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:25.770829+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:26.771020+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:27.771169+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:28.771425+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:29.771676+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:30.771927+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:31.772203+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:32.772387+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:33.772570+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:34.772709+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:35.772837+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:36.773014+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:37.773179+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:38.773446+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:39.773643+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:40.773826+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:41.773961+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:42.774127+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:43.774366+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:44.774559+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:45.774696+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:46.774883+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:47.775015+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:48.775144+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:49.775279+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:50.775410+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:51.775554+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:52.775679+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:53.775846+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:54.775964+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:55.776114+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:56.776337+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:57.776567+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:58.779215+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:09:59.779368+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:00.779516+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:01.779694+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:02.779904+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:03.780046+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:04.780264+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:05.780512+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:06.780699+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:07.780846+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:08.781058+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:09.781272+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8714800 session 0x55c0a87010e0
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:10.781407+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:11.781585+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:12.781769+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:13.781896+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:14.782015+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:15.782179+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:16.782397+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:17.782522+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:18.782643+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:19.782783+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:20.782931+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:21.783052+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:22.783223+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935710 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:23.783370+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b34000
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 95.271827698s of 95.274726868s, submitted: 1
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:24.783533+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:25.783694+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:26.783902+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 1933312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:27.784029+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:28.784195+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:29.784332+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 1916928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:30.784631+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:31.784789+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:32.785010+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:33.785209+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:34.785466+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:35.785640+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:36.785846+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:37.786051+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:38.786237+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:39.786433+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:40.786551+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:41.786677+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:42.786825+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:43.787030+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:44.787199+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:45.787405+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:46.787699+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:47.787846+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:48.788054+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:49.788212+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:50.788382+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:51.788605+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:58 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:52.788787+0000)
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:58 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:58 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:58 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17841 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:58 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 23 10:40:58 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254495411' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:40:58 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:40:58.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:53.788966+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:54.789133+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:55.789265+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:56.789514+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:57.790491+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:58.790781+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:10:59.790919+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 1908736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:00.791059+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:01.791190+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:02.791338+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:03.791458+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:04.791628+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:05.791729+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:06.791884+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:07.792005+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:08.792143+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:09.792261+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:10.792396+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:11.792501+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:12.792632+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:13.792749+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:14.792878+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:15.793032+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:16.793203+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:17.793320+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:18.793411+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:19.793526+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:20.793634+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:21.793821+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:22.794081+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:23.794207+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:24.794337+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:25.794530+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:26.794735+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:27.794906+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:28.795031+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 1900544 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:29.795176+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:30.795340+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:31.795575+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:32.795759+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:33.795927+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:34.796072+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:35.796212+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:36.796405+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:37.796526+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:38.796682+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:39.796851+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:40.797041+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:41.797190+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:42.797414+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:43.797556+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:44.797725+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:45.797893+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:46.798115+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:47.798280+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:48.798495+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:49.798659+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:50.798816+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:51.798971+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:52.799132+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a88732c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:53.799280+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a6cbe400 session 0x55c0a78ad2c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:54.799431+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:55.799621+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:56.799849+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 1884160 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:57.799962+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:58.800100+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:11:59.800289+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:00.800426+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:01.800565+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:02.800712+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935726 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 1875968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6cbe400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.687911987s of 99.724082947s, submitted: 12
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:03.800879+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 1867776 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:04.801005+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,1])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 1572864 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:05.801304+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 1400832 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:06.801558+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 1384448 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:07.801787+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937502 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 327680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:08.801942+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 327680 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:09.802077+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 1376256 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:10.802250+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 1359872 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:11.802450+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1335296 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:12.802658+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937486 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:13.802883+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:14.803064+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 1310720 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:15.803188+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.640565872s of 12.511781693s, submitted: 245
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:16.803462+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:17.803625+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939014 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:18.803793+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:19.803984+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:20.804140+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 1253376 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:21.804292+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:22.804560+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 1245184 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:23.804721+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 1236992 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:24.804908+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:25.805125+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:26.805433+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:27.805593+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:28.805774+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:29.805962+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:30.806192+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:31.806315+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 1228800 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:32.806538+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:33.806693+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:34.806877+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:35.807090+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:36.807336+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:37.807568+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 1220608 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:38.807787+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:39.807949+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:40.808398+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:41.808599+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:42.808825+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 ms_handle_reset con 0x55c0a8b34000 session 0x55c0a9519680
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:43.808997+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:44.809155+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:45.809389+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 1204224 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:46.809757+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:47.810032+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:48.810239+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:49.810412+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:50.810599+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:51.810753+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:52.810986+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938143 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:53.811141+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.789596558s of 37.869590759s, submitted: 10
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:54.811332+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 1196032 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:55.811541+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:56.811855+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1187840 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:57.812078+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939803 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:58.812213+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:12:59.812414+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:00.812521+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:01.812667+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:02.812892+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939635 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:03.813069+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.943451881s of 10.137590408s, submitted: 10
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:04.813502+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 139264 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:05.813660+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:06.813856+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:07.813989+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939196 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:08.814112+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:09.814306+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:10.814433+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:11.814604+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:12.814745+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:13.814849+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:14.815048+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 131072 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:15.815203+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:16.815437+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:17.815580+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:18.815784+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:19.816326+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:20.816551+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:21.816847+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:22.817033+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:23.817228+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:24.817425+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:25.817644+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:26.817885+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:27.818082+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 122880 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:28.818285+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:29.818423+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:30.818580+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:31.819205+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:32.819368+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:33.820386+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:34.821113+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:35.821317+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:36.821833+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:37.822189+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:38.822577+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:39.822718+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:40.822919+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:41.823232+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:42.823573+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:43.823877+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:44.824043+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:45.824220+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:46.824476+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 114688 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:47.824616+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:48.824884+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:49.825049+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:50.825213+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:51.825453+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:52.825585+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:53.825712+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:54.825896+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:55.826134+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:56.826322+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:57.826491+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:58.826649+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:13:59.826819+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:00.826989+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:01.827157+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:02.827329+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:03.827496+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:04.827690+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:05.827872+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:06.828046+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:07.828199+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:08.828393+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:09.828648+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:10.828803+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:11.828934+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:12.829094+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:13.829195+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:14.829338+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:15.829517+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:16.829698+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:17.829843+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:18.829982+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:19.830205+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:20.830382+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:21.830516+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:22.830679+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:23.830875+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:24.831038+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:25.831169+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:26.831427+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:27.831610+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:28.831737+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:29.831898+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:30.832034+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:31.832201+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:32.832337+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:33.832522+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:34.832720+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:35.832909+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:36.834712+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:37.835922+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:38.836169+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:39.836376+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:40.884013+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:41.884889+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:42.885058+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:43.885389+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:44.885569+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:45.885760+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:46.885980+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:47.886270+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:48.886436+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 106496 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:49.886866+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:50.887067+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:51.887276+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:52.887408+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:53.887773+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:54.887936+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:55.888129+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:56.888409+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:57.888663+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:58.888849+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:14:59.889023+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:00.889177+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:01.889406+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:02.889557+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:03.889713+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939064 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:04.889891+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:05.890151+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc66d000/0x0/0x4ffc00000, data 0xebe00/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:06.890380+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:07.890547+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 123.779830933s of 123.839828491s, submitted: 2
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 90112 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:08.890699+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946520 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 81920 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:09.890890+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 8282112 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fc1f5000/0x0/0x4ffc00000, data 0x56002c/0x615000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _renew_subs
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 141 ms_handle_reset con 0x55c0a8839000 session 0x55c0a980a780
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:10.891060+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:11.891167+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:12.891454+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 142 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a7927680
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:13.891665+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987291 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:14.891856+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc1ed000/0x0/0x4ffc00000, data 0x56426f/0x61d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:15.892075+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:16.892286+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:17.892501+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:18.892628+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989117 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:19.892803+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:20.892946+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:21.893104+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:22.893235+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:23.893327+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989117 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:24.893496+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:25.893768+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:26.894215+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:27.894339+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 ms_handle_reset con 0x55c0a6cbe400 session 0x55c0a78a8b40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:28.894508+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989117 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:29.894656+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:30.894795+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:31.894942+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:32.895094+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:33.895221+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989117 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:34.895420+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:35.895589+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:36.895872+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:37.896037+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 ms_handle_reset con 0x55c0a8714800 session 0x55c0a689cb40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:38.896419+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989117 data_alloc: 218103808 data_used: 131072
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6cbe400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.314317703s of 31.454385757s, submitted: 41
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:39.897610+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:40.898679+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:41.901797+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:42.902780+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:43.904843+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988425 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:44.905685+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 ms_handle_reset con 0x55c0a8838800 session 0x55c0a8cfbc20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:45.905873+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:46.906573+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:47.907575+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0x566241/0x620000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:48.907724+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988557 data_alloc: 218103808 data_used: 126976
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 8249344 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:49.908211+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a980a960
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b34000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 ms_handle_reset con 0x55c0a8b34000 session 0x55c0a9519860
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 3588096 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:50.908585+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 3588096 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:51.908807+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.712681770s of 12.734780312s, submitted: 7
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _renew_subs
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 89628672 unmapped: 3530752 heap: 93159424 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:52.908961+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8ad3800 session 0x55c0a89fad20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b34400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8b34400 session 0x55c0a7927e00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8838800 session 0x55c0a8cfb0e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8ad3800 session 0x55c0a94b70e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a8701a40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 8953856 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:53.909116+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fbd89000/0x0/0x4ffc00000, data 0x9c746d/0xa83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b34000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8b34000 session 0x55c0a7928d20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045402 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fbd89000/0x0/0x4ffc00000, data 0x9c746d/0xa83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 8953856 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:54.909438+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 8953856 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:55.909600+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c82000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8c82000 session 0x55c0a90f94a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 8937472 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:56.909893+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8ad3800 session 0x55c0a88723c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 8937472 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:57.910102+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a8cf4b40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fbd89000/0x0/0x4ffc00000, data 0x9c746d/0xa83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fbd89000/0x0/0x4ffc00000, data 0x9c746d/0xa83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 8945664 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:58.910250+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8b34000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046235 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 8937472 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:15:59.910434+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 4710400 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:00.910580+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:01.910753+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 4710400 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _renew_subs
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.127929688s of 10.011770248s, submitted: 43
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fbd85000/0x0/0x4ffc00000, data 0x9c943f/0xa86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:02.910909+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 4710400 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:03.911069+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 4710400 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080893 data_alloc: 234881024 data_used: 9277440
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:04.911214+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 4710400 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:05.911330+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:06.911611+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fbd86000/0x0/0x4ffc00000, data 0x9c943f/0xa86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:07.911828+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fbd86000/0x0/0x4ffc00000, data 0x9c943f/0xa86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:08.912089+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079314 data_alloc: 234881024 data_used: 9281536
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:09.912238+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:10.912432+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4694016 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:11.912635+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 2785280 heap: 99459072 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa66f000/0x0/0x4ffc00000, data 0xf3a43f/0xff7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:12.913451+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.363606453s of 10.592675209s, submitted: 82
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101515264 unmapped: 1089536 heap: 102604800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:13.914280+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103063552 unmapped: 589824 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:14.914925+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 581632 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:15.915663+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 581632 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:16.916239+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 581632 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:17.916533+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:18.916769+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:19.917047+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:20.917527+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:21.917798+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:22.918152+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:23.918488+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:24.918687+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:25.918868+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:26.919021+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:27.919177+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:28.919318+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:29.919516+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:30.919649+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 573440 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:31.919796+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103088128 unmapped: 565248 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:32.919957+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103088128 unmapped: 565248 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:33.920259+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103088128 unmapped: 565248 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:34.920429+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103088128 unmapped: 565248 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:35.920547+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103096320 unmapped: 557056 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:36.921130+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103096320 unmapped: 557056 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:37.921272+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103096320 unmapped: 557056 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:38.921454+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103096320 unmapped: 557056 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a980ab40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6890400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136968 data_alloc: 234881024 data_used: 10600448
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a6890400 session 0x55c0a8cfa000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa64c000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:39.921606+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103104512 unmapped: 548864 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:40.921766+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5e400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.849666595s of 28.051139832s, submitted: 24
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101679104 unmapped: 1974272 heap: 103653376 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5e400 session 0x55c0a94db680
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:41.921905+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6890400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 7168000 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a6890400 session 0x55c0a8cfa5a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:42.922045+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 7168000 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:43.922204+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 7168000 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159061 data_alloc: 234881024 data_used: 10600448
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:44.922400+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f3000/0x0/0x4ffc00000, data 0x12bb4a1/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 7536640 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:45.922752+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 7536640 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:46.923189+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 7528448 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f3000/0x0/0x4ffc00000, data 0x12bb4a1/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:47.923526+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 7528448 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f3000/0x0/0x4ffc00000, data 0x12bb4a1/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:48.923829+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 7528448 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159061 data_alloc: 234881024 data_used: 10600448
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:49.924063+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 7528448 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a78a9c20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:50.924284+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 7544832 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:51.924426+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 7544832 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f3000/0x0/0x4ffc00000, data 0x12bb4a1/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:52.924961+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 7544832 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:53.925194+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 7544832 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8ad3800 session 0x55c0a89fa1e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159061 data_alloc: 234881024 data_used: 10600448
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:54.925416+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 7536640 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f3000/0x0/0x4ffc00000, data 0x12bb4a1/0x1379000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:55.925579+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 7536640 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8ad3c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8ad3c00 session 0x55c0a8addc20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5e000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.989857674s of 15.692141533s, submitted: 33
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5e000 session 0x55c0a78ccd20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:56.925963+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 7528448 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a6890400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:57.926174+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101392384 unmapped: 7593984 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:58.926498+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102801408 unmapped: 6184960 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178338 data_alloc: 234881024 data_used: 12828672
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:16:59.926718+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f2000/0x0/0x4ffc00000, data 0x12bb4c4/0x137a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:00.926894+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f2000/0x0/0x4ffc00000, data 0x12bb4c4/0x137a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:01.927044+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:02.927246+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:03.927421+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178338 data_alloc: 234881024 data_used: 12828672
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:04.927642+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:05.927867+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 6004736 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f2000/0x0/0x4ffc00000, data 0x12bb4c4/0x137a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:06.928099+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103014400 unmapped: 5971968 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa2f2000/0x0/0x4ffc00000, data 0x12bb4c4/0x137a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:07.928334+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103030784 unmapped: 5955584 heap: 108986368 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.189620018s of 12.223476410s, submitted: 19
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:08.928507+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 4136960 heap: 110739456 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230102 data_alloc: 234881024 data_used: 13017088
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:09.928682+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9ef7000/0x0/0x4ffc00000, data 0x16b64c4/0x1775000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 5947392 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:10.928805+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 5431296 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b5a000/0x0/0x4ffc00000, data 0x1a524c4/0x1b11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:11.928981+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 5210112 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:12.929167+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 5210112 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:13.929304+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b5a000/0x0/0x4ffc00000, data 0x1a524c4/0x1b11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 5177344 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253840 data_alloc: 234881024 data_used: 13660160
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:14.929573+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 5177344 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:15.929759+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 5177344 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:16.930009+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 6201344 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:17.930772+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b3c000/0x0/0x4ffc00000, data 0x1a714c4/0x1b30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 6201344 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a6890400 session 0x55c0a689d680
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:18.931065+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c89c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.077943802s of 10.287703514s, submitted: 78
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 6184960 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c89c00 session 0x55c0a78abc20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137898 data_alloc: 234881024 data_used: 10588160
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:19.931243+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:20.931600+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa658000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:21.931757+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:22.932246+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:23.932456+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137898 data_alloc: 234881024 data_used: 10588160
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:24.932599+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:25.933001+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:26.933606+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa658000/0x0/0x4ffc00000, data 0xf5543f/0x1012000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:27.933794+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:28.934128+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.085984230s of 10.180105209s, submitted: 29
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8b34000 session 0x55c0a773dc20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105005056 unmapped: 7839744 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c89000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137314 data_alloc: 234881024 data_used: 10588160
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:29.934263+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c89000 session 0x55c0a73663c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101392384 unmapped: 11452416 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:30.934586+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 11706368 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:31.934803+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 11706368 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:32.935176+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a980bc20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 11706368 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:33.935327+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 11706368 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026788 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:34.935472+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 11706368 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:35.935587+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:36.935762+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:37.935895+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:38.936084+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026788 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:39.936256+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:40.936411+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:41.936796+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:42.936964+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.142122269s of 14.206800461s, submitted: 25
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:43.937149+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026920 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:44.937382+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:45.937541+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 11771904 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:46.937977+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 12189696 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:47.938253+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 12189696 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:48.938563+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 12189696 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:49.938727+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026936 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:50.939018+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:51.939220+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:52.939434+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec800 session 0x55c0a980a5a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a8ad9680
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ecc00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ecc00 session 0x55c0a8ad90e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89edc00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89edc00 session 0x55c0a9440d20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:53.939607+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a94403c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:54.939795+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026936 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a73683c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 12181504 heap: 112844800 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:55.939964+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.481446266s of 12.513579369s, submitted: 11
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a7369680
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 21381120 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:56.940218+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 21381120 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:57.940427+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 21381120 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:58.940614+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 21372928 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:17:59.940883+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078832 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 21372928 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec800 session 0x55c0a7368000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:00.941151+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ecc00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa99e000/0x0/0x4ffc00000, data 0xc10462/0xcce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 21315584 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:01.941387+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89edc00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 21864448 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:02.941556+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 18857984 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:03.941789+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 18857984 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:04.941969+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127938 data_alloc: 234881024 data_used: 11554816
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 18857984 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:05.942180+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa99e000/0x0/0x4ffc00000, data 0xc10462/0xcce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 18857984 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:06.942413+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 18857984 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa99e000/0x0/0x4ffc00000, data 0xc10462/0xcce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:07.942628+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103497728 unmapped: 18866176 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:08.942840+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103497728 unmapped: 18866176 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:09.943017+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127938 data_alloc: 234881024 data_used: 11554816
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103497728 unmapped: 18866176 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:10.943154+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103497728 unmapped: 18866176 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:11.943435+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa99e000/0x0/0x4ffc00000, data 0xc10462/0xcce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.397214890s of 16.504899979s, submitted: 21
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 17924096 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:12.983317+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa071000/0x0/0x4ffc00000, data 0x153d462/0x15fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a94412c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107651072 unmapped: 14712832 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:13.983493+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107757568 unmapped: 14606336 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:14.983665+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202938 data_alloc: 234881024 data_used: 11608064
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 14557184 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:15.983799+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 14467072 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:16.984044+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 14467072 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:17.984248+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f83000/0x0/0x4ffc00000, data 0x162b462/0x16e9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 14467072 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:18.984416+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 14467072 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:19.984539+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202202 data_alloc: 234881024 data_used: 11612160
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 14467072 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:20.984670+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 15196160 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:21.984815+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 15753216 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:22.984997+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 15753216 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.089375496s of 11.247861862s, submitted: 88
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:23.985218+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f5f000/0x0/0x4ffc00000, data 0x164f462/0x170d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 16146432 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:24.985393+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202094 data_alloc: 234881024 data_used: 11612160
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 16146432 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:25.985543+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 16146432 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:26.985789+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f5f000/0x0/0x4ffc00000, data 0x164f462/0x170d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 16097280 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:27.986095+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 16080896 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:28.986232+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 16080896 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:29.986362+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205214 data_alloc: 234881024 data_used: 11608064
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 16072704 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:30.986855+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 16072704 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f59000/0x0/0x4ffc00000, data 0x1655462/0x1713000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:31.987005+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 16072704 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:32.987221+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f59000/0x0/0x4ffc00000, data 0x1655462/0x1713000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 16072704 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:33.987344+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:34.987541+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205470 data_alloc: 234881024 data_used: 11608064
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:35.987689+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:36.987886+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f56000/0x0/0x4ffc00000, data 0x1658462/0x1716000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:37.988112+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.554450035s of 14.728706360s, submitted: 16
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:38.988437+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 16064512 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:39.988572+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205794 data_alloc: 234881024 data_used: 11620352
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9f56000/0x0/0x4ffc00000, data 0x1658462/0x1716000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 15826944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89edc00 session 0x55c0a8700960
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:40.988717+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 15826944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ecc00 session 0x55c0a89d4960
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:41.988878+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 15826944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:42.989135+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 20619264 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:43.989306+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a94dbc20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:44.989444+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:45.989584+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:46.989767+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:47.989900+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:48.990037+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:49.990189+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:50.990315+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:51.990475+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:52.990607+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:53.990746+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:54.990914+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:55.991044+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:56.991200+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:57.991323+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:58.991451+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:18:59.991584+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:00.991730+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:01.991888+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:02.992068+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:03.992201+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:04.992449+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:05.992590+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:06.992818+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:07.992988+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:08.993137+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:09.993320+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038158 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:10.993453+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 9331 writes, 35K keys, 9331 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9331 writes, 2167 syncs, 4.31 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1449 writes, 4381 keys, 1449 commit groups, 1.0 writes per commit group, ingest: 4.46 MB, 0.01 MB/s
                                           Interval WAL: 1449 writes, 617 syncs, 2.35 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 20946944 heap: 122363904 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:11.993613+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.843894958s of 33.845417023s, submitted: 36
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a8cf4d20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:12.993762+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:13.993884+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:14.994034+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081228 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:15.994244+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:16.994493+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faaad000/0x0/0x4ffc00000, data 0xb0243f/0xbbf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faaad000/0x0/0x4ffc00000, data 0xb0243f/0xbbf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:17.994662+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 24961024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:18.994849+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a8ad0d20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101933056 unmapped: 24633344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:19.994981+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8882c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085381 data_alloc: 218103808 data_used: 4796416
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 101801984 unmapped: 24764416 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:20.995171+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:21.995407+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:22.995610+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa88000/0x0/0x4ffc00000, data 0xb26462/0xbe4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:23.995780+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa88000/0x0/0x4ffc00000, data 0xb26462/0xbe4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:24.995918+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117757 data_alloc: 234881024 data_used: 9527296
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:25.996071+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:26.996245+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:27.996428+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:28.996584+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:29.996699+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117757 data_alloc: 234881024 data_used: 9527296
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa88000/0x0/0x4ffc00000, data 0xb26462/0xbe4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:30.996841+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:31.996991+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:32.997162+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:33.997311+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:34.997403+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119885 data_alloc: 234881024 data_used: 9584640
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 22585344 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:35.997538+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa88000/0x0/0x4ffc00000, data 0xb26462/0xbe4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.795618057s of 23.862209320s, submitted: 13
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 21291008 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:36.997693+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105275392 unmapped: 21291008 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:37.997829+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:38.997953+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 21168128 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa828000/0x0/0x4ffc00000, data 0xd86462/0xe44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:39.998111+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144469 data_alloc: 234881024 data_used: 9867264
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:40.998269+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:41.998405+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:42.998633+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:43.998769+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:44.998937+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144469 data_alloc: 234881024 data_used: 9867264
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:45.999129+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:46.999380+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:47.999583+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:48.999818+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:49.999987+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144469 data_alloc: 234881024 data_used: 9867264
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:51.000136+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:52.000292+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:53.000435+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:54.000558+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:55.000757+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 21405696 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144469 data_alloc: 234881024 data_used: 9867264
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883c00 session 0x55c0a8ad1a40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a8cfb680
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a9441680
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883c00 session 0x55c0a8546b40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:56.000962+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105144320 unmapped: 21422080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e462/0xe4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:57.001213+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105144320 unmapped: 21422080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:58.001456+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105144320 unmapped: 21422080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.838689804s of 21.900722504s, submitted: 29
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:19:59.002246+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 105144320 unmapped: 21422080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0xd8e48b/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a78c0780
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f6000/0x0/0x4ffc00000, data 0x13b748b/0x1476000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:00.002482+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ecc00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106749952 unmapped: 19816448 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ecc00 session 0x55c0a9529c20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192758 data_alloc: 234881024 data_used: 9867264
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:01.002940+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106758144 unmapped: 19808256 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:02.003078+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106758144 unmapped: 19808256 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f6000/0x0/0x4ffc00000, data 0x13b74c4/0x1476000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a8cfa960
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:03.003445+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 20258816 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:04.003604+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 20258816 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:05.004064+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f6000/0x0/0x4ffc00000, data 0x13b74c4/0x1476000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 19873792 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237059 data_alloc: 234881024 data_used: 15400960
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:06.004281+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 16564224 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:07.004492+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 16564224 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f6000/0x0/0x4ffc00000, data 0x13b74c4/0x1476000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:08.004951+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 16564224 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:09.005392+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f6000/0x0/0x4ffc00000, data 0x13b74c4/0x1476000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 16564224 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:10.005775+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 16564224 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237059 data_alloc: 234881024 data_used: 15400960
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:11.006445+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.794046402s of 12.929645538s, submitted: 35
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 16523264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:12.008223+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 16523264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:13.008436+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 16523264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f4000/0x0/0x4ffc00000, data 0x13b84c4/0x1477000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:14.008598+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 16523264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa1f4000/0x0/0x4ffc00000, data 0x13b84c4/0x1477000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:15.008766+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111370240 unmapped: 15196160 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274463 data_alloc: 234881024 data_used: 16379904
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:16.008932+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 115621888 unmapped: 10944512 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:17.009112+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 10412032 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:18.009273+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 10412032 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9da6000/0x0/0x4ffc00000, data 0x18074c4/0x18c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:19.009446+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:20.009618+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288467 data_alloc: 234881024 data_used: 17231872
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:21.009768+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:22.009909+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9da6000/0x0/0x4ffc00000, data 0x18074c4/0x18c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:23.010056+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:24.010180+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.063606262s of 13.290143013s, submitted: 63
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:25.010336+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 10379264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883c00 session 0x55c0a94db0e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288567 data_alloc: 234881024 data_used: 17240064
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a9529680
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a9528d20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:26.011048+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 16154624 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a89d4d20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa4df000/0x0/0x4ffc00000, data 0xd8f462/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:27.011457+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 16130048 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:28.011639+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 16130048 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:29.011914+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 16130048 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:30.012095+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa4df000/0x0/0x4ffc00000, data 0xd8f462/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 16130048 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec800 session 0x55c0a8adda40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8882c00 session 0x55c0a99310e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151023 data_alloc: 218103808 data_used: 9027584
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:31.012348+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 16130048 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:32.012632+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a8cfa780
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:33.013566+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:34.013729+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:35.013848+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051737 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:36.013984+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.560731888s of 12.156168938s, submitted: 49
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:37.014483+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:38.014622+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:39.015282+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 18800640 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:40.015521+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107782144 unmapped: 18784256 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051593 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:41.015653+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107782144 unmapped: 18784256 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:42.015886+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107782144 unmapped: 18784256 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:43.016053+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:44.016207+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:45.016524+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050834 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:46.016650+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:47.016926+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:48.017112+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.856632233s of 12.034231186s, submitted: 10
Jan 23 10:40:59 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27385 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:49.017320+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:50.017501+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050263 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:51.017656+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:52.017791+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:53.017936+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:54.018064+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:55.018394+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050263 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:56.018532+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:57.018736+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:58.018882+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:20:59.019084+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:00.019208+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050263 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:01.019362+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:02.019584+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:03.019783+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:04.019964+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 18776064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:05.020123+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.572757721s of 16.855772018s, submitted: 2
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 18563072 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067963 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:06.020285+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 18554880 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:07.020557+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0x70643f/0x7c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 18554880 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:08.020697+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a7369680
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:09.020933+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:10.021077+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067963 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a95170e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:11.021258+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a95165a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:12.021409+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0x70643f/0x7c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8882c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8882c00 session 0x55c0a9516d20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:13.021528+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a9517860
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:14.021743+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 18571264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea8000/0x0/0x4ffc00000, data 0x70644f/0x7c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:15.021946+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073425 data_alloc: 218103808 data_used: 5316608
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:16.022110+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea8000/0x0/0x4ffc00000, data 0x70644f/0x7c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:17.022411+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:18.022558+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:19.022720+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:20.022933+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073425 data_alloc: 218103808 data_used: 5316608
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:21.023136+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea8000/0x0/0x4ffc00000, data 0x70644f/0x7c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:22.023261+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:23.023416+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8838800 session 0x55c0a94db2c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:24.023638+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 18513920 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:25.023776+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faea8000/0x0/0x4ffc00000, data 0x70644f/0x7c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 18505728 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.093435287s of 20.120613098s, submitted: 9
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087727 data_alloc: 218103808 data_used: 5349376
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:26.023907+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 17842176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:27.024440+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 17793024 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:28.024722+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 17408000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:29.024904+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 17408000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:30.025108+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109158400 unmapped: 17408000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122385 data_alloc: 218103808 data_used: 5582848
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:31.025240+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9e2000/0x0/0x4ffc00000, data 0xbb244f/0xc70000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109166592 unmapped: 17399808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:32.025458+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109166592 unmapped: 17399808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:33.025622+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 18046976 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:34.025840+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 18046976 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:35.026003+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 18046976 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117053 data_alloc: 218103808 data_used: 5582848
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:36.026889+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f9000/0x0/0x4ffc00000, data 0xbb544f/0xc73000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 18046976 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:37.027118+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f9000/0x0/0x4ffc00000, data 0xbb544f/0xc73000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.935380936s of 12.115594864s, submitted: 71
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:38.027446+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:39.027618+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:40.028530+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f9000/0x0/0x4ffc00000, data 0xbb544f/0xc73000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117069 data_alloc: 218103808 data_used: 5578752
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:41.028823+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:42.029137+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:43.029301+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 18038784 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:44.029471+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8838000 session 0x55c0a78a9e00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:45.029656+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f9000/0x0/0x4ffc00000, data 0xbb544f/0xc73000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117069 data_alloc: 218103808 data_used: 5578752
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:46.029787+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a5ebb400 session 0x55c0a66c2f00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:47.029994+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:48.030139+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:49.030277+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:50.030730+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f8000/0x0/0x4ffc00000, data 0xbb644f/0xc74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117277 data_alloc: 218103808 data_used: 5582848
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:51.030977+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.792579651s of 13.853042603s, submitted: 8
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:52.031138+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8d7b000 session 0x55c0a6aaf0e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8882c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:53.031267+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a90f81e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a8587860
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a78bf860
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 18030592 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a78cc000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8742000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:54.031559+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8742000 session 0x55c0a78aaf00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a9519a40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a78a9e00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a78a9c20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a78a8b40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:55.031795+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e45e/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:56.031966+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140291 data_alloc: 218103808 data_used: 5582848
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:57.032168+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c89000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c89000 session 0x55c0a78adc20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xd8e45e/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:58.032343+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a78ac3c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a78ad2c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:21:59.032642+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a956e1e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 18522112 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c88000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:00.032831+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 18497536 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:01.032987+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154601 data_alloc: 218103808 data_used: 7262208
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8d7b800 session 0x55c0a78be000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 18497536 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:02.033238+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 18497536 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0xd8e46e/0xe4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:03.033444+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 18497536 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:04.033634+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.119801521s of 13.166754723s, submitted: 15
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108093440 unmapped: 18472960 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:05.033762+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 18350080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:06.033884+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154993 data_alloc: 218103808 data_used: 7430144
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 18186240 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:07.034064+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0xd8e46e/0xe4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 18112512 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:08.034252+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 108453888 unmapped: 18112512 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:09.034442+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 17063936 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:10.034571+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 17063936 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:11.034731+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157705 data_alloc: 218103808 data_used: 7430144
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa81b000/0x0/0x4ffc00000, data 0xd8f46e/0xe4f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 11935744 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:12.034878+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:13.035052+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:14.035216+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:15.035410+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f99ea000/0x0/0x4ffc00000, data 0x1bc146e/0x1c81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:16.035544+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a8ad5a40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1276755 data_alloc: 218103808 data_used: 9060352
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:17.035707+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 10248192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:18.035889+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.644134521s of 13.565666199s, submitted: 360
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f99e9000/0x0/0x4ffc00000, data 0x1bc346e/0x1c83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 11608064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:19.036009+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 11608064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f99e9000/0x0/0x4ffc00000, data 0x1bc346e/0x1c83000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:20.036143+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 11608064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:21.036277+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268627 data_alloc: 218103808 data_used: 9060352
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 11608064 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:22.036666+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f99e8000/0x0/0x4ffc00000, data 0x1bc446e/0x1c84000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 11599872 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:23.036822+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f99e8000/0x0/0x4ffc00000, data 0x1bc446e/0x1c84000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 11599872 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:24.036969+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 11599872 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:25.037107+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 11599872 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:26.037254+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268691 data_alloc: 218103808 data_used: 9060352
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883800 session 0x55c0a90f81e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c88000 session 0x55c0a8700780
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 11591680 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:27.037467+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a5ece3c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f5000/0x0/0x4ffc00000, data 0xbb944f/0xc77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:28.037639+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:29.037805+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.570550919s of 11.678001404s, submitted: 20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:30.037968+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f5000/0x0/0x4ffc00000, data 0xbb944f/0xc77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:31.038133+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130310 data_alloc: 218103808 data_used: 5578752
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:32.038293+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:33.038462+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:34.038620+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:35.038788+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:36.038937+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130310 data_alloc: 218103808 data_used: 5578752
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f5000/0x0/0x4ffc00000, data 0xbb944f/0xc77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:37.039133+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:38.039289+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f5000/0x0/0x4ffc00000, data 0xbb944f/0xc77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:39.039426+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14213120 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:40.039553+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.610730171s of 10.671720505s, submitted: 9
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 14475264 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:41.039653+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a87005a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec800 session 0x55c0a89fb680
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130682 data_alloc: 218103808 data_used: 5578752
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9f5000/0x0/0x4ffc00000, data 0xbb944f/0xc77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 14467072 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:42.039769+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a73663c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:43.039864+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:44.039967+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:45.040088+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:46.040246+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073177 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:47.040431+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:48.040563+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 14786560 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:49.040692+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:50.040788+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:51.040935+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073177 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:52.041058+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:53.041247+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:54.041418+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:55.041552+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:56.041689+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073177 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 14778368 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:57.041926+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:58.042048+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a6cbe400 session 0x55c0a956fe00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:22:59.042158+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:00.042278+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:01.042442+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073177 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:02.042576+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:03.042707+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:04.042850+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 14770176 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:05.042980+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:06.043110+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073177 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:07.043269+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:08.043458+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:09.043628+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fadca000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.895616531s of 28.939466476s, submitted: 15
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:10.043768+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:11.043947+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072077 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 14761984 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:12.044093+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8714c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8714c00 session 0x55c0a78ab860
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 14581760 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:13.044237+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 14573568 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:14.044424+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faf2f000/0x0/0x4ffc00000, data 0x68043f/0x73d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 14573568 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faf2f000/0x0/0x4ffc00000, data 0x68043f/0x73d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:15.044592+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 14581760 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:16.044744+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faf2f000/0x0/0x4ffc00000, data 0x68043f/0x73d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082839 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec400 session 0x55c0a89fc780
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:17.045041+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 14581760 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a89ec800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:18.045260+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 14581760 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:19.045402+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 14581760 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:20.045539+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 14573568 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faf2e000/0x0/0x4ffc00000, data 0x680462/0x73e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:21.045669+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 14655488 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084189 data_alloc: 218103808 data_used: 4796416
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:22.045797+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 14655488 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a89ec800 session 0x55c0a90f8f00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:23.045919+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c88000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 14655488 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.563888550s of 13.634863853s, submitted: 20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c88000 session 0x55c0a94da780
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:24.046042+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 14647296 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:25.046218+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 14647296 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:26.046384+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 14647296 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075050 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:27.046593+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 14647296 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:28.047551+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111919104 unmapped: 14647296 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:29.047739+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:30.047880+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:31.048088+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075050 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:32.048263+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:33.048437+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:34.048593+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:35.048751+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:36.048913+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111927296 unmapped: 14639104 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075050 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:37.049134+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 14630912 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c88000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.995434761s of 14.031690598s, submitted: 13
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c88000 session 0x55c0a66c3c20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:38.049260+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112312320 unmapped: 14254080 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fafdd000/0x0/0x4ffc00000, data 0x5d243f/0x68f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:39.049414+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fafdd000/0x0/0x4ffc00000, data 0x5d243f/0x68f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:40.049568+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fafdd000/0x0/0x4ffc00000, data 0x5d243f/0x68f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:41.049716+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a7366b40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082456 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a78c14a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8d7b800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8d7b800 session 0x55c0a95165a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:42.049921+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2400 session 0x55c0a78cbc20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a689c1e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:43.050093+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:44.050225+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:45.050444+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 14245888 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fafdd000/0x0/0x4ffc00000, data 0x5d243f/0x68f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:46.051127+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 14270464 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082760 data_alloc: 218103808 data_used: 4825088
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:47.051328+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 14270464 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fafdd000/0x0/0x4ffc00000, data 0x5d243f/0x68f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:48.051449+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 14270464 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a8ad9e00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c88000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:49.051651+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 14270464 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.093137741s of 12.102007866s, submitted: 5
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:50.051913+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111493120 unmapped: 15073280 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c88000 session 0x55c0a8cfab40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: mgrc ms_handle_reset ms_handle_reset con 0x55c0a8c5ec00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4198923246
Jan 23 10:40:59 compute-0 ceph-osd[82641]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4198923246,v1:192.168.122.100:6801/4198923246]
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: get_auth_request con 0x55c0a8883800 auth_method 0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: mgrc handle_mgr_configure stats_period=5
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:51.052153+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075806 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:52.052404+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:53.052509+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:54.052692+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a956f0e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:55.052845+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:56.052990+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075806 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:57.053190+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:58.053309+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 15368192 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:23:59.053491+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:00.053716+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:01.053952+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075806 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:02.054166+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:03.054282+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:04.054454+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:05.054593+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8d7b800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.253648758s of 16.266384125s, submitted: 4
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:06.054850+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075938 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:07.055121+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 15360000 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:08.055271+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:09.055468+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:10.055652+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:11.055829+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075822 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:12.056006+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:13.056144+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:14.056328+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:15.056514+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:16.056683+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111214592 unmapped: 15351808 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075822 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:17.056818+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:18.056935+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:19.057084+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.407391548s of 13.959419250s, submitted: 5
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:20.057220+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:21.057444+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075690 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:22.057605+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:23.057701+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:24.058508+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:25.058655+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 15343616 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:26.058794+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 15335424 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075690 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:27.058975+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a95185a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a980b4a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a73692c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a8546d20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 15335424 heap: 126566400 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a6aae960
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c88000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c88000 session 0x55c0a95292c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a94c2f00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a94da000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a7368000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:28.059120+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 20684800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:29.059296+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 20684800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:30.059464+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 20684800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa95a000/0x0/0x4ffc00000, data 0xc5444f/0xd12000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:31.059599+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 20684800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133978 data_alloc: 218103808 data_used: 4788224
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:32.059767+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 20684800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa95a000/0x0/0x4ffc00000, data 0xc5444f/0xd12000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:33.059905+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.359125137s of 13.556776047s, submitted: 14
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20676608 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:34.060048+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20676608 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a87014a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2c00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:35.060175+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20676608 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:36.060328+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20676608 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135247 data_alloc: 218103808 data_used: 4796416
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:37.060542+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 20512768 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:38.060684+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa959000/0x0/0x4ffc00000, data 0xc54472/0xd13000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:39.060822+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa959000/0x0/0x4ffc00000, data 0xc54472/0xd13000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:40.061000+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:41.061881+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183583 data_alloc: 234881024 data_used: 11939840
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:42.062017+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:43.062156+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa959000/0x0/0x4ffc00000, data 0xc54472/0xd13000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:44.062306+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:45.062702+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa959000/0x0/0x4ffc00000, data 0xc54472/0xd13000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:46.062829+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183583 data_alloc: 234881024 data_used: 11939840
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:47.063033+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 17432576 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:48.063227+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.850227356s of 15.030684471s, submitted: 4
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 13516800 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:49.063641+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 12918784 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:50.063897+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa0d8000/0x0/0x4ffc00000, data 0x10bd472/0x117c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,1,0,7,2])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 14065664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:51.065830+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 14065664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232297 data_alloc: 234881024 data_used: 13488128
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:52.066583+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:53.068094+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:54.068277+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:55.068491+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:56.069309+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa0ce000/0x0/0x4ffc00000, data 0x10c7472/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235945 data_alloc: 234881024 data_used: 13746176
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:57.069745+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:58.069912+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:24:59.070460+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:00.071029+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 13950976 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa0ce000/0x0/0x4ffc00000, data 0x10c7472/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:01.071187+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa0ce000/0x0/0x4ffc00000, data 0x10c7472/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 13885440 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235945 data_alloc: 234881024 data_used: 13746176
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa0ce000/0x0/0x4ffc00000, data 0x10c7472/0x1186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:02.071397+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2c00 session 0x55c0a90f9e00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 13885440 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.456407547s of 14.740316391s, submitted: 71
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:03.071655+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a94c2780
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:04.071973+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:05.072188+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:06.072376+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083725 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:07.072614+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:08.072773+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:09.073008+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:10.073266+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:11.073451+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083725 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:12.073582+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:13.073758+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a8cf4b40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:14.073968+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:15.074120+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:16.074248+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083725 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:17.074550+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:18.074752+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:19.074974+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:20.075166+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:21.075396+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083725 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:22.075544+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:23.075711+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:24.075916+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112246784 unmapped: 20135936 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.819961548s of 21.865715027s, submitted: 18
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a78a8b40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:25.076039+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a6aafa40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:26.076298+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138998 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:27.076933+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa468000/0x0/0x4ffc00000, data 0xd364a1/0xdf4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:28.077105+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:29.077467+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:30.077602+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 20209664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:31.077719+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 20144128 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172894 data_alloc: 234881024 data_used: 9756672
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa468000/0x0/0x4ffc00000, data 0xd364a1/0xdf4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [1])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:32.077870+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:33.078031+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:34.078177+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:35.078340+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:36.078548+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193870 data_alloc: 234881024 data_used: 12111872
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:37.078744+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa468000/0x0/0x4ffc00000, data 0xd364a1/0xdf4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:38.078905+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa468000/0x0/0x4ffc00000, data 0xd364a1/0xdf4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:39.079047+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:40.079252+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa468000/0x0/0x4ffc00000, data 0xd364a1/0xdf4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 18022400 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:41.079395+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.707281113s of 16.824014664s, submitted: 24
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114368512 unmapped: 18014208 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193738 data_alloc: 234881024 data_used: 12111872
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:42.079553+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114376704 unmapped: 18006016 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:43.079680+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 14639104 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:44.079831+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa010000/0x0/0x4ffc00000, data 0x118e4a1/0x124c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 14639104 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:45.079968+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af3000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af3000 session 0x55c0a66c23c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:46.080084+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9bc2000/0x0/0x4ffc00000, data 0x15dc4a1/0x169a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268072 data_alloc: 234881024 data_used: 12111872
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:47.080298+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:48.080525+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9bc2000/0x0/0x4ffc00000, data 0x15dc4a1/0x169a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:49.080728+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:50.080936+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:51.081206+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 14401536 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266352 data_alloc: 234881024 data_used: 12115968
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:52.081343+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.506553650s of 10.660974503s, submitted: 56
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a78aaf00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 14065664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:53.081647+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 14065664 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:54.081934+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b9b000/0x0/0x4ffc00000, data 0x16034a1/0x16c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120659968 unmapped: 11722752 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:55.082227+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120659968 unmapped: 11722752 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:56.082477+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120659968 unmapped: 11722752 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293193 data_alloc: 234881024 data_used: 15368192
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:57.082677+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120659968 unmapped: 11722752 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:58.082879+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b9b000/0x0/0x4ffc00000, data 0x16034a1/0x16c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120659968 unmapped: 11722752 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:25:59.083153+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b9b000/0x0/0x4ffc00000, data 0x16034a1/0x16c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 11689984 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:00.083451+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 11689984 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:01.083579+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 11689984 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293193 data_alloc: 234881024 data_used: 15368192
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:02.083691+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 11689984 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:03.083889+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 11689984 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b9b000/0x0/0x4ffc00000, data 0x16034a1/0x16c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:04.084148+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.563364029s of 12.585421562s, submitted: 7
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 10600448 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:05.084290+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 9797632 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:06.084501+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123068416 unmapped: 9314304 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:07.084718+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332875 data_alloc: 234881024 data_used: 15872000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f97b9000/0x0/0x4ffc00000, data 0x19df4a1/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:08.084901+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:09.085032+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:10.085138+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:11.085338+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:12.085547+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332875 data_alloc: 234881024 data_used: 15872000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x19ef4a1/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:13.085683+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:14.085902+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:15.086274+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:16.086555+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.927405357s of 11.535178185s, submitted: 52
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:17.086751+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333043 data_alloc: 234881024 data_used: 15872000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x19ef4a1/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:18.087001+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:19.087325+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:20.087502+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:21.087646+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:22.087893+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333043 data_alloc: 234881024 data_used: 15872000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:23.088102+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 9297920 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x19ef4a1/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:24.088304+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a78cd4a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a95290e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 10067968 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:25.088471+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 10067968 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:26.088638+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f97af000/0x0/0x4ffc00000, data 0x19ef4a1/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,5])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.795674324s of 10.131553650s, submitted: 3
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 12763136 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:27.088992+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241079 data_alloc: 234881024 data_used: 12226560
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119627776 unmapped: 12754944 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:28.089147+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:29.089427+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9fee000/0x0/0x4ffc00000, data 0x11b04a1/0x126e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a9528780
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:30.089648+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:31.089787+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:32.089974+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234211 data_alloc: 234881024 data_used: 12115968
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:33.090112+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:34.090317+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9fee000/0x0/0x4ffc00000, data 0x11b04a1/0x126e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:35.090520+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:36.090690+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:37.090963+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234211 data_alloc: 234881024 data_used: 12115968
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9fee000/0x0/0x4ffc00000, data 0x11b04a1/0x126e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:38.091174+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:39.091430+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:40.091630+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.307956696s of 13.755240440s, submitted: 22
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a7926d20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:41.091761+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:42.091981+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234139 data_alloc: 234881024 data_used: 12115968
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:43.092193+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8c5f400
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 12738560 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9fee000/0x0/0x4ffc00000, data 0x11b04a1/0x126e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:44.092446+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:45.092642+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:46.092978+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:47.093231+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8c5f400 session 0x55c0a78cbe00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:48.093463+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:49.093657+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:50.093795+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:51.093921+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:52.094072+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:53.094223+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:54.094343+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:55.094499+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:56.094654+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:57.094827+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:58.094944+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:26:59.095086+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:00.095211+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:01.095419+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:02.095572+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 nova_compute[249229]: 2026-01-23 10:40:59.125 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:03.095799+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:04.095954+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:05.096137+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:06.096391+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:07.096647+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:08.096802+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:09.096999+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:10.097209+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:11.097370+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:12.097518+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:13.097695+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:14.097877+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:15.098087+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:16.098248+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:17.098433+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:18.098622+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:19.101838+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:20.101990+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:21.102133+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:22.102290+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095163 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:23.102586+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:24.102758+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 18194432 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:25.102977+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.846614838s of 44.900455475s, submitted: 16
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 18030592 heap: 132382720 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fab1f000/0x0/0x4ffc00000, data 0x68043f/0x73d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:26.103155+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a87012c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa28f000/0x0/0x4ffc00000, data 0xf1043f/0xfcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:27.103384+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166153 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:28.103547+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa28f000/0x0/0x4ffc00000, data 0xf1043f/0xfcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:29.103709+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a8ad63c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8883000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8883000 session 0x55c0a85194a0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:30.103874+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a6aaf0e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:31.104013+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:32.104143+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166153 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa28f000/0x0/0x4ffc00000, data 0xf1043f/0xfcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:33.104328+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:34.104508+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 29532160 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2800 session 0x55c0a6aafa40
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:35.104641+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114556928 unmapped: 29376512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:36.104745+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 29360128 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:37.104929+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 29360128 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170403 data_alloc: 218103808 data_used: 4796416
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:38.105091+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa26a000/0x0/0x4ffc00000, data 0xf3444f/0xff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 29745152 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:39.105259+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:40.105441+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:41.105554+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa26a000/0x0/0x4ffc00000, data 0xf3444f/0xff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:42.105660+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230899 data_alloc: 234881024 data_used: 13778944
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:43.105791+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:44.105982+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:45.106137+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa26a000/0x0/0x4ffc00000, data 0xf3444f/0xff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:46.106259+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:47.106461+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117792768 unmapped: 26140672 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230899 data_alloc: 234881024 data_used: 13778944
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:48.106616+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 26132480 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa26a000/0x0/0x4ffc00000, data 0xf3444f/0xff2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:49.106776+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 26116096 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.689573288s of 24.789501190s, submitted: 12
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:50.106922+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa008000/0x0/0x4ffc00000, data 0x119644f/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:51.107060+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:52.107210+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251219 data_alloc: 234881024 data_used: 13832192
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:53.107427+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa008000/0x0/0x4ffc00000, data 0x119644f/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:54.107584+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:55.107732+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 22470656 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:56.107881+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa008000/0x0/0x4ffc00000, data 0x119644f/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 23355392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:57.108090+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 23355392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283733 data_alloc: 234881024 data_used: 13811712
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:58.108263+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 23355392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:27:59.108477+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 23355392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b7c000/0x0/0x4ffc00000, data 0x162244f/0x16e0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:00.108628+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.881953239s of 10.221186638s, submitted: 45
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 23330816 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:01.108796+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 120979456 unmapped: 22953984 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:02.109223+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121143296 unmapped: 22790144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294573 data_alloc: 234881024 data_used: 14086144
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:03.109398+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 22740992 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:04.110042+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 22740992 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b44000/0x0/0x4ffc00000, data 0x165144f/0x170f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:05.110169+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:06.110316+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:07.110914+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295789 data_alloc: 234881024 data_used: 14213120
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b44000/0x0/0x4ffc00000, data 0x165144f/0x170f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:08.111208+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x165444f/0x1712000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:09.111414+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:10.111562+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 22708224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8839000 session 0x55c0a78ad0e0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a886f800 session 0x55c0a89faf00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:11.111713+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af3800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.106009483s of 11.168287277s, submitted: 28
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:12.111929+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111229 data_alloc: 218103808 data_used: 4902912
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af3800 session 0x55c0a89fb2c0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:13.112087+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x59043f/0x64d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:14.112223+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:15.112379+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:16.112501+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:17.112660+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:18.112837+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:19.113066+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:20.113228+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:21.113420+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:22.113566+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:23.113698+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:24.113836+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:25.114045+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:26.114228+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:27.114547+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:28.114690+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:29.115066+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:30.115394+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:31.115589+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:32.115779+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:33.115936+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:34.116114+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:35.116240+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:36.116440+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:37.116915+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:38.117179+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:39.117397+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:40.117567+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:41.117692+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:42.117893+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:43.118029+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:44.118196+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:45.118400+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:46.118577+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:47.118965+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:48.119139+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:49.119275+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:50.119507+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:51.119727+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:52.120012+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:53.120255+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:54.120413+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:55.120558+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:56.120814+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:57.121050+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:58.121200+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:28:59.121404+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:00.121558+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:01.121743+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:02.121913+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:03.122042+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:04.122221+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:05.122401+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:06.122557+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:07.122754+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:08.122936+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:09.123076+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:10.123223+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:11.123374+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 2975 syncs, 3.79 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1946 writes, 6900 keys, 1946 commit groups, 1.0 writes per commit group, ingest: 8.66 MB, 0.01 MB/s
                                           Interval WAL: 1946 writes, 808 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:12.123593+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:13.123736+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:14.123877+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:15.124025+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:16.124297+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:17.124601+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:18.124806+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:19.124977+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:20.125123+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:21.125298+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:22.125497+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:23.125670+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:24.125906+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:25.126068+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:26.126293+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:27.126566+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:28.126785+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:29.126975+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:30.127176+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:31.127333+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:32.127471+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:33.127602+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:34.127812+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:35.127960+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:36.128086+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:37.128259+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:38.128397+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:39.128531+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:40.128658+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 27328512 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:41.128764+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 27344896 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'config diff' '{prefix=config diff}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:42.128891+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'config show' '{prefix=config show}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 27418624 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:43.129037+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 27336704 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:44.129174+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'log dump' '{prefix=log dump}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 127647744 unmapped: 16285696 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:45.129307+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'perf dump' '{prefix=perf dump}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'perf schema' '{prefix=perf schema}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 27467776 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:46.129515+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 27467776 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:47.129649+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 27467776 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:48.129793+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 27467776 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:49.129920+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 27467776 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:50.130043+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 27467776 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:51.130182+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 27459584 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:52.130317+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 27459584 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:53.130495+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 27459584 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:54.130649+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 27459584 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:55.130756+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 27459584 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:56.130903+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 27459584 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:57.131088+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 27459584 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:58.131196+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:29:59.131339+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 27459584 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:00.131492+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 27459584 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:01.131631+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 27451392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 27451392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:02.176007+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 27451392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:03.176138+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 27451392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:04.176285+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:05.176407+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 27451392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:06.176560+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 27451392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:07.176767+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 27451392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:08.176931+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 27451392 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:09.177101+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 27443200 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:10.177224+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 27443200 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:11.177337+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 27443200 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:12.177512+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 27443200 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:13.177636+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 27443200 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:14.191456+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 27443200 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:15.191627+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 27443200 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:16.191774+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 27443200 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:17.192408+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:18.192558+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:19.192719+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:20.192854+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:21.192989+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:22.193119+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:23.193272+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:24.193463+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:25.193687+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:26.194372+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:27.194525+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:28.194695+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:29.194847+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:30.195006+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 27435008 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:31.195132+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 27426816 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:32.195251+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 27426816 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:33.195397+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 27426816 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:34.195632+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 27426816 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:35.195785+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 27426816 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:36.195940+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 27426816 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:37.196166+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 27426816 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:38.196414+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 27426816 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:39.196571+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 27418624 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:40.196796+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 27418624 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:41.196991+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 27418624 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:42.197190+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 27418624 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:43.197342+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 27418624 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:44.198254+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 27418624 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:45.199151+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 27418624 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:46.199606+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 27418624 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:47.200478+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 27410432 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:48.200703+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 27410432 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:49.201330+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 27410432 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:50.202094+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 27410432 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:51.202479+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 27410432 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:52.202674+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 27410432 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:53.202791+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 27410432 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:54.203007+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 27410432 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:55.203251+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:56.203643+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:57.203838+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:58.204062+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:30:59.204441+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:00.204894+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:01.205241+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:02.205466+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:03.205727+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:04.206075+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:05.206474+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:06.206595+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:07.206869+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:08.207077+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:09.207288+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:10.207531+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 27402240 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:11.207693+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 27394048 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:12.207938+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 27394048 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:13.208094+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 27394048 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:14.208236+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 27394048 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:15.208460+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 27394048 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:16.208586+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 27394048 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:17.208847+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 27394048 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:18.209462+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 27394048 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:19.209886+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:20.210296+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:21.210699+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:22.211141+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:23.211439+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:24.211601+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:25.211922+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:26.212116+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:27.212297+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:28.212424+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:29.212613+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:30.212790+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:31.213034+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:32.213250+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 27385856 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:33.213457+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:34.213643+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:35.213790+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:36.214017+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:37.214297+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:38.214511+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:39.214733+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:40.214886+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:41.215069+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:42.215199+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:43.215373+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:44.215844+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:45.216091+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:46.216237+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 27377664 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:47.216440+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:48.216605+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:49.216741+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:50.216904+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:51.217059+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:52.217220+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:53.217440+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:54.217618+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:55.217753+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:56.217993+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:57.218205+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:58.218423+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:31:59.218550+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:00.218709+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 27369472 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:01.218839+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:02.218971+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:03.219096+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:04.219251+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 233.604873657s of 233.627273560s, submitted: 11
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 27361280 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:05.219427+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,1])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 27353088 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:06.219544+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 27312128 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:07.219666+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 27287552 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:08.219802+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 27287552 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:09.219959+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 27287552 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:10.220111+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 27254784 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:11.220261+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 27230208 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:12.220429+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,0,1,2])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 27222016 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:13.220608+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104684 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 27189248 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:14.220780+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.567575455s of 10.004065514s, submitted: 174
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 27131904 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:15.220948+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 27099136 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:16.221132+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 27066368 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:17.221323+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:18.221496+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:19.221693+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:20.223018+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:21.223295+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:22.223659+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:23.223872+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:24.224948+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:25.225112+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:26.225296+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:27.225562+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:28.226001+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:29.226131+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:30.226341+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:31.226544+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 27033600 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:32.226666+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:33.226804+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:34.226961+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:35.227102+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:36.227239+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:37.227410+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:38.227546+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:39.227665+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:40.227819+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:41.227935+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:42.228126+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:43.228300+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:44.228443+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:45.228602+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:46.228831+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:47.229199+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:48.229437+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:49.229717+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:50.229931+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:51.230158+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:52.230315+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:53.230492+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:54.230602+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:55.230694+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:56.230830+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:57.231073+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:58.231231+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:32:59.231476+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:00.231650+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:01.231804+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 27017216 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:02.231947+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 27009024 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:03.232099+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 27009024 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:04.232240+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 27009024 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:05.232427+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 27009024 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:06.232616+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 27009024 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:07.232849+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 27009024 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:08.233000+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 27009024 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:09.233270+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 27009024 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:10.233628+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 27009024 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:11.233791+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 27000832 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:12.233994+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26992640 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:13.234259+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26992640 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:14.234485+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26992640 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:15.234637+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26992640 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:16.234802+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26992640 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:17.234982+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26992640 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:18.235128+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26992640 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:19.235263+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26992640 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:20.235414+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26992640 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:21.235575+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26984448 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:22.235685+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26984448 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:23.235850+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26984448 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:24.236206+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26984448 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:25.237223+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26984448 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:26.237394+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26984448 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:27.239091+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26976256 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:28.240091+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26976256 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:29.240544+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26976256 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:30.241012+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26976256 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:31.241333+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26976256 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:32.241663+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26976256 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:33.241975+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26976256 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:34.242252+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26976256 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:35.242531+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26976256 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:36.242705+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26976256 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:37.242897+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26968064 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:38.243141+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26968064 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:39.243364+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26968064 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:40.243455+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26968064 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:41.243606+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26968064 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:42.243776+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26968064 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:43.243990+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26968064 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:44.244139+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26968064 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:45.244326+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26959872 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:46.244564+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26959872 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:47.244785+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26959872 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:48.244989+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26959872 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:49.245161+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26959872 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:50.245343+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26959872 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:51.245572+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26959872 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:52.245767+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26959872 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:53.246710+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 26951680 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:54.246860+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 26951680 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:55.247063+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 26951680 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:56.247194+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 26951680 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:57.247427+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 26951680 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:58.247648+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 26951680 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:33:59.247807+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 26951680 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:00.247987+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 26951680 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:01.248180+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 26951680 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:02.248415+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 26951680 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:03.248518+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 26951680 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:04.248761+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:05.248892+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:06.249030+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:07.249211+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:08.249389+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:09.249525+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:10.249643+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:11.249818+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets getting new tickets!
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:12.250179+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _finish_auth 0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:12.251237+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:13.250308+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:14.250457+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:15.250601+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:16.250693+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 26943488 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:17.250906+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 26935296 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:18.251043+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 26935296 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:19.251177+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 26935296 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:20.251328+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 26935296 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:21.251459+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 26935296 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:22.251625+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 26935296 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:23.251793+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 26935296 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:24.252110+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 26935296 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:25.252285+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 26935296 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:26.252426+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:27.252595+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:28.252735+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:29.253202+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:30.253601+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:31.253933+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:32.254164+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:33.254393+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:34.254532+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:35.254678+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:36.254822+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:37.255002+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 26927104 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:38.255680+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 26918912 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:39.256184+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 26918912 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:40.256438+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 26918912 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:41.256706+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 26918912 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:42.256819+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 26918912 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:43.256958+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 26918912 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:44.257434+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 26918912 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:45.257714+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 26918912 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:46.257897+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 26918912 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:47.258186+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 26918912 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:48.258425+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 26910720 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:49.258554+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 26910720 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:50.259066+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 26910720 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:51.259301+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 26910720 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:52.259493+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 26910720 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:53.259649+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 26910720 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:54.259772+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 26910720 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:55.259940+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 26910720 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:56.260184+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 26910720 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:57.260435+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 26902528 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:58.260578+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 26902528 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:34:59.260748+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 26902528 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:00.260909+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 26902528 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:01.261065+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 26902528 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:02.261228+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 26902528 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:03.261390+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 26902528 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:04.261511+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 26902528 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:05.261650+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 26902528 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:06.261806+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 26902528 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:07.261967+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 26894336 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:08.262093+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 26894336 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:09.262222+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 26894336 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:10.262427+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 26894336 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:11.262553+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 26894336 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:12.262728+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 26894336 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:13.262888+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26886144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:14.263040+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26886144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:15.263197+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26886144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:16.263324+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26886144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:17.263511+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26886144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:18.263679+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26886144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:19.263835+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26886144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:20.264001+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26886144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:21.264153+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26886144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:22.264372+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26886144 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:23.264542+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:24.264673+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:25.264838+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:26.264964+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:27.265206+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:28.265336+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:29.265527+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:30.265714+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:31.265908+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:32.266094+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:33.266918+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:34.267579+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:35.267794+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26877952 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:36.267943+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26869760 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:37.268441+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26869760 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:38.268856+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26869760 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:39.269078+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26869760 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:40.269446+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26869760 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:41.269726+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26869760 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:42.269985+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26869760 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:43.270199+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:44.270415+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:45.270583+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:46.270734+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:47.271028+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:48.271335+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:49.271757+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:50.272128+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:51.272503+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:52.272802+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:53.273024+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:54.273286+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26861568 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:55.273457+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26853376 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:56.273604+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26853376 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:57.273911+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26853376 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:58.274077+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26853376 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:35:59.274251+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26853376 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:00.274440+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26845184 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:01.274588+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26845184 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:02.274841+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26845184 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:03.275049+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26845184 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:04.275232+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26845184 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:05.275384+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26836992 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:06.275529+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26836992 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:07.275725+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26836992 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:08.275846+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26836992 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:09.276015+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26836992 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:10.276156+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26836992 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:11.276330+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26836992 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:12.276522+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26828800 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:13.276678+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26828800 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:14.276811+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26828800 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:15.276889+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26828800 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:16.277045+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26828800 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:17.277224+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26828800 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:18.277423+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:19.277600+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26828800 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:20.277716+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 26828800 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:21.277865+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 26820608 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:22.277994+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 26820608 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:23.278101+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 26820608 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:24.278260+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 26812416 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:25.278468+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 26812416 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:26.278635+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 26812416 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:27.278803+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 26812416 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:28.278938+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 26812416 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:29.279069+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 26812416 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:30.279230+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 26812416 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:31.279404+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 26804224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:32.279590+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 26804224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:33.279854+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 26804224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:34.280049+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 26804224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:35.280242+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 26804224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:36.280422+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 26804224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:37.280646+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 26804224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:38.280838+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 26804224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:39.280945+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 26804224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:40.281106+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 26804224 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:41.281447+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 26796032 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:42.281615+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 26796032 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:43.281740+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 26796032 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:44.281911+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 26796032 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8838000 session 0x55c0a956fc20
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8838000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:45.282093+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 26796032 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8838800 session 0x55c0a9528000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8839000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:46.282180+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 26796032 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:47.282384+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 26796032 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:48.282523+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 26796032 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:49.282621+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 26787840 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:50.282769+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 26787840 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:51.282921+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 26787840 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:52.283082+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 26787840 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8882c00 session 0x55c0a51eaf00
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a886f800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:53.283200+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26779648 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:54.283412+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26779648 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:55.283668+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26779648 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:56.283843+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26779648 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:57.284070+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26771456 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:58.284229+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26771456 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:36:59.284427+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26771456 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:00.284585+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26771456 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:01.284747+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 ms_handle_reset con 0x55c0a8af2000 session 0x55c0a8700000
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: handle_auth_request added challenge on 0x55c0a8af2800
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26771456 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:02.284946+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26771456 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:03.285096+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26771456 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:04.285259+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26771456 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:05.285408+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26771456 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:06.285539+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26771456 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:07.285729+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26771456 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:08.285902+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 26763264 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:09.286088+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 26763264 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:10.286268+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 26763264 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:11.286427+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:12.286569+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:13.287687+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:14.290133+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:15.290292+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:16.290616+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:17.290934+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:18.291658+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:19.291932+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:20.292535+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:21.292718+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:22.292949+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:23.293338+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:24.293600+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:25.293820+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 26755072 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:26.294108+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 26746880 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:27.294396+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 26746880 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:28.294562+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 26746880 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:29.294792+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 26746880 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:30.295091+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 26746880 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:31.295398+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 26746880 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:32.295558+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 26746880 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:33.295737+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 26746880 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:34.295936+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 26746880 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:35.296073+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 26746880 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:36.296228+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:37.296417+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:38.296636+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:39.296803+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:40.297018+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:41.297197+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:42.297816+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:43.298503+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:44.299071+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:45.299480+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:46.299732+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:47.300092+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:48.300457+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:49.300682+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:50.300915+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:51.301095+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:52.301440+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:53.301601+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:54.301729+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:55.301884+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:56.302042+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:57.302242+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 26738688 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:58.302406+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 26730496 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:37:59.302588+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 26730496 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:00.302728+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 26730496 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:01.302869+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 26730496 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:02.303036+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 26730496 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:03.303182+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 26730496 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:04.303335+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 26730496 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:05.303494+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 26730496 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:06.303656+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 26730496 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:07.303859+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 26730496 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:08.303991+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 26730496 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:09.304151+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 26722304 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:10.304326+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 26722304 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:11.304560+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 26722304 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:12.304759+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 26722304 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:13.304926+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:14.305187+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:15.305465+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:16.305633+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:17.305826+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:18.306022+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:19.306243+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:20.306399+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:21.306652+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:22.306827+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:23.306963+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:24.307139+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:25.307392+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:26.307550+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:27.307736+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:28.307956+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:29.310670+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:30.310887+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:31.311096+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:32.311250+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:33.311400+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:34.311588+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:35.311727+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:36.311899+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:37.312112+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:38.312259+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:39.312441+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:40.312643+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:41.312805+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:42.312993+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:43.313155+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:44.313398+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:45.313626+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:46.313835+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:47.314079+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:48.314942+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 26714112 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:49.316812+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26705920 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:50.317437+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26705920 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:51.317669+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26705920 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:52.317980+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26705920 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:53.318219+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26705920 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:54.318588+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26705920 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:55.319089+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26705920 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:56.319558+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26705920 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:57.320189+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26705920 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:58.320329+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:38:59.320539+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:00.320693+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:01.320837+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:02.321053+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:03.321293+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:04.321518+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:05.321828+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:06.322027+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:07.322313+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:08.322464+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:09.322690+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 26697728 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:10.322915+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:11.323127+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 3181 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 435 writes, 682 keys, 435 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s
                                           Interval WAL: 435 writes, 206 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:12.323406+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:13.323635+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:14.323852+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:15.324075+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:16.324291+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:17.324529+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:18.324761+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:19.325335+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:20.325792+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:21.326074+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:22.326253+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:23.327163+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:24.327846+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:25.328098+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:26.328582+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:27.329099+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:28.329870+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:29.330128+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:30.330440+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:31.331009+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:32.331398+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26689536 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:33.331689+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:34.331837+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:35.331988+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:36.332267+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:37.332640+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:38.332815+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:39.333067+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:40.333320+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:41.333481+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:42.333804+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:43.334088+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:44.334337+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:45.334579+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:46.334799+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:47.335045+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:48.335501+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:49.335742+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:50.335909+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:51.336121+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:52.337129+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:53.338000+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:54.338614+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:55.338911+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:56.339036+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:57.339708+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:58.339894+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:39:59.340381+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:00.340835+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:01.341128+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:02.341745+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:03.341997+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:04.342561+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:05.342831+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:06.343181+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:07.343463+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:08.343597+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 26681344 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:09.343922+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 26673152 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:10.344258+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 26673152 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:11.344469+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 26673152 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:12.344729+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 26673152 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:13.344958+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 26673152 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:14.345145+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 26673152 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:15.345414+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 26673152 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:16.345678+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 26673152 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:17.345915+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 26664960 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:18.346051+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 26664960 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:19.346236+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 26664960 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:20.346482+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 26664960 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:21.346651+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 26664960 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:22.346803+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 26664960 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:23.346945+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 26664960 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:24.347132+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 26664960 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:25.347291+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 26664960 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 10:40:59 compute-0 ceph-osd[82641]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 10:40:59 compute-0 ceph-osd[82641]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104468 data_alloc: 218103808 data_used: 4792320
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:26.347449+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'config diff' '{prefix=config diff}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 26648576 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'config show' '{prefix=config show}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:27.347629+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 26787840 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x56c43f/0x629000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: tick
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_tickets
Jan 23 10:40:59 compute-0 ceph-osd[82641]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-23T10:40:28.347754+0000)
Jan 23 10:40:59 compute-0 ceph-osd[82641]: prioritycache tune_memory target: 4294967296 mapped: 117489664 unmapped: 26443776 heap: 143933440 old mem: 2845415832 new mem: 2845415832
Jan 23 10:40:59 compute-0 ceph-osd[82641]: do_command 'log dump' '{prefix=log dump}'
Jan 23 10:40:59 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27185 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 10:40:59 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:40:59 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:40:59 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:59.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:40:59 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17862 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27403 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 23 10:40:59 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477516146' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: pgmap v1439: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.17787 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.27331 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.17808 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.27346 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3124684249' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1271257117' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.17820 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.27361 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/445151465' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3276688100' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/666089249' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.17841 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1254495411' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.27385 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.27185 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1212026481' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1596010367' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: from='client.17862 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1440: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:40:59 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27200 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:40:59.798 161921 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 10:40:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:40:59.799 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 10:40:59 compute-0 ovn_metadata_agent[161916]: 2026-01-23 10:40:59.799 161921 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 10:40:59 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17871 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27418 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 23 10:40:59 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2044172642' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:40:59 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:40:59 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:59] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:40:59 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:40:59] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 23 10:41:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27221 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17886 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27436 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 23 10:41:00 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564409329' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:41:00 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:00 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:41:00 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:00.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:41:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27239 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.27403 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3477516146' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: pgmap v1440: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.27200 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.17871 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/239682889' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1315781543' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.27418 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2044172642' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.27221 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.17886 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1745865103' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.27436 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2564409329' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/716455899' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17910 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:00 compute-0 crontab[291888]: (root) LIST (root)
Jan 23 10:41:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27457 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17925 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:00 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27254 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27469 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:01 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:01 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:41:01 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:01.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:41:01 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17937 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27269 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27484 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1441: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.27239 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.17910 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.27457 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/875828868' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2414639810' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/148398286' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.17925 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.27254 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.27469 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.17937 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1085532514' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: from='client.27269 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17949 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 23 10:41:01 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/887272108' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 23 10:41:01 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27278 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:01 compute-0 nova_compute[249229]: 2026-01-23 10:41:01.952 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:41:01 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27496 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:02 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.17961 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 23 10:41:02 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3110235279' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 23 10:41:02 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27287 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:02 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:02 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:41:02 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:02.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:41:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 23 10:41:02 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/312762499' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 23 10:41:02 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 23 10:41:02 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/420082444' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 23 10:41:02 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27302 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27314 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.27484 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: pgmap v1441: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1637687014' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.17949 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/887272108' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.27278 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.27496 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/712991638' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/233774706' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.17961 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3110235279' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.27287 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3108936715' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1660365616' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/312762499' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/420082444' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 23 10:41:03 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:03 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:41:03 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:03.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:41:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 23 10:41:03 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236167764' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 23 10:41:03 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644214128' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1442: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:41:03 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27326 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 23 10:41:03 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222612964' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 23 10:41:03 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:41:03.809Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:41:03 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 23 10:41:03 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452223012' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 nova_compute[249229]: 2026-01-23 10:41:04.127 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:41:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 23 10:41:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1233404613' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 23 10:41:04 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:04 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:41:04 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:04.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:41:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 23 10:41:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117015144' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 systemd[1]: Starting Hostname Service...
Jan 23 10:41:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 23 10:41:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/674797361' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 23 10:41:04 compute-0 systemd[1]: Started Hostname Service.
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.27302 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3345460299' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1102334395' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2934036653' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.27314 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2385619581' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/369562290' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3236167764' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2272500806' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1644214128' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/222612964' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3665764273' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/4166326524' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1452223012' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3579800927' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2110475920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2527938535' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/3690345591' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1233404613' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 23 10:41:04 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359288182' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 23 10:41:04 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:41:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 10:41:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2663519361' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 23 10:41:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:41:05 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:05 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:41:05 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:05.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:41:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 23 10:41:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2144523971' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 23 10:41:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4016238504' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1443: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:41:05 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 23 10:41:05 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3327562445' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.18081 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: pgmap v1442: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.27326 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3136483563' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3117015144' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/674797361' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/924431456' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/4081738896' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2279399780' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3359288182' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2663519361' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1474099025' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/265346978' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2144523971' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4016238504' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 23 10:41:05 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3327562445' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 23 10:41:06 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.18099 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 23 10:41:06 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1078464107' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 23 10:41:06 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.18105 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:06 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:06 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:41:06 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:41:06 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 23 10:41:06 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1979006449' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 10:41:06 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.18120 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:06 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27637 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:06 compute-0 nova_compute[249229]: 2026-01-23 10:41:06.955 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:41:06 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27655 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27416 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27664 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:07 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:07 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:41:07 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:07.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:41:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 23 10:41:07 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2907186737' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1444: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:41:07 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.18147 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27673 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mon[74335]: pgmap v1443: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:41:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/631640421' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mon[74335]: from='client.18081 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/142027931' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1856915728' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mon[74335]: from='client.18099 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1078464107' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mon[74335]: from='client.18105 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/444520388' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1979006449' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/516926078' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Jan 23 10:41:07 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1493572992' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 23 10:41:07 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:41:07.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:41:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.18165 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27428 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27691 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 23 10:41:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109945965' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:08 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 10:41:08 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:08.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 10:41:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.18177 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27440 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27703 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 23 10:41:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166585728' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27446 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.18120 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.27637 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.27655 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2337187949' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.27416 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.27664 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2907186737' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: pgmap v1444: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.18147 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.27673 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/4288764517' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1493572992' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.18165 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.27428 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.27691 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/1961194654' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2866924100' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/2109945965' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4166585728' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:41:08 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:41:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.18195 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:08 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27721 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-alertmanager-compute-0[104185]: ts=2026-01-23T10:41:09.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 23 10:41:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:41:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:41:09 compute-0 nova_compute[249229]: 2026-01-23 10:41:09.130 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:41:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27461 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:09 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:09 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:41:09 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:09.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:41:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27739 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:09 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1445: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:41:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27473 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:09 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27775 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:41:09 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:41:09 compute-0 ceph-mon[74335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 10:41:09 compute-0 ceph-f3005f84-239a-55b6-a948-8f1fb592b920-mgr-compute-0-nbdygh[74629]: ::ffff:192.168.122.100 - - [23/Jan/2026:10:41:09] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:41:09 compute-0 ceph-mgr[74633]: [prometheus INFO cherrypy.access.139810979899424] ::ffff:192.168.122.100 - - [23/Jan/2026:10:41:09] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 23 10:41:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 23 10:41:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4243432763' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.18177 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.27440 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.27703 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.27446 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/704825705' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.18195 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.27721 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.27461 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/2035341214' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.27739 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/4171598296' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:41:10 compute-0 ceph-mon[74335]: from='client.? 192.168.122.102:0/1386926707' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27497 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:41:10 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.18273 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:10 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:10 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:41:10 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:10.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:41:10 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27503 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:10 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 23 10:41:10 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3453822127' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 23 10:41:11 compute-0 sudo[293272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 23 10:41:11 compute-0 sudo[293272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:41:11 compute-0 sudo[293272]: pam_unix(sudo:session): session closed for user root
Jan 23 10:41:11 compute-0 sudo[293318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f3005f84-239a-55b6-a948-8f1fb592b920/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 23 10:41:11 compute-0 sudo[293318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:41:11 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27524 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: pgmap v1445: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='client.27473 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='client.27775 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/4243432763' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/3513160830' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='client.27497 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='client.18273 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2845937288' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3453822127' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 23 10:41:11 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:11 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 10:41:11 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:11.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 10:41:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Jan 23 10:41:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307632022' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 23 10:41:11 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.27536 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:11 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1446: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 23 10:41:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:41:11 compute-0 sudo[293318]: pam_unix(sudo:session): session closed for user root
Jan 23 10:41:11 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 23 10:41:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/230289181' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 23 10:41:11 compute-0 sudo[293434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 23 10:41:11 compute-0 sudo[293434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 23 10:41:11 compute-0 sudo[293434]: pam_unix(sudo:session): session closed for user root
Jan 23 10:41:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:41:11 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:41:11 compute-0 nova_compute[249229]: 2026-01-23 10:41:11.959 249233 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 10:41:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:41:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 23 10:41:12 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 23 10:41:12 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826576521' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 23 10:41:12 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:12 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:41:12 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:12.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:41:12 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:41:12 compute-0 ceph-mgr[74633]: log_channel(audit) log [DBG] : from='client.18345 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 10:41:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 23 10:41:13 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2164360078' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 23 10:41:13 compute-0 radosgw[93748]: ====== starting new request req=0x7fa5c588c5d0 =====
Jan 23 10:41:13 compute-0 radosgw[93748]: ====== req done req=0x7fa5c588c5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 10:41:13 compute-0 radosgw[93748]: beast: 0x7fa5c588c5d0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:13.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='client.27503 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='client.27524 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.101:0/2100362027' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/3307632022' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/230289181' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:41:13 compute-0 ceph-mon[74335]: from='client.? 192.168.122.100:0/1826576521' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 23 10:41:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1447: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 23 10:41:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 23 10:41:13 compute-0 ceph-mon[74335]: log_channel(audit) log [DBG] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 10:41:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 23 10:41:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 10:41:13 compute-0 ceph-mgr[74633]: log_channel(cluster) log [DBG] : pgmap v1448: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 608 B/s rd, 0 op/s
Jan 23 10:41:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 23 10:41:13 compute-0 ceph-mon[74335]: log_channel(audit) log [INF] : from='mgr.14604 192.168.122.100:0/2333519895' entity='mgr.compute-0.nbdygh' 
Jan 23 10:41:13 compute-0 ceph-mon[74335]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
